Test Report: Docker_macOS 16144

5a8f8cb541418da3ae1b3ffd9c263e271e7d084b:2023-03-31:28590

Test failures (15/318)

TestErrorSpam/setup (24.46s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-497000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-497000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 --driver=docker : (24.46457795s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0"
error_spam_test.go:110: minikube stdout:
* [nospam-497000] minikube v1.29.0 on Darwin 13.3
- MINIKUBE_LOCATION=16144
- KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node nospam-497000 in cluster nospam-497000
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-497000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
--- FAIL: TestErrorSpam/setup (24.46s)
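Note: the cluster itself came up cleanly here; the test fails only on the unexpected stderr warning, which reports a cached kicbase image built for minikube v1.30.0 being driven by a v1.29.0 binary. Following the hint in the warning itself, a local cleanup might look like the following (a sketch only; the binary path and profile name are taken from this run, and the extra start flags from the original invocation are omitted):

    out/minikube-darwin-amd64 delete -p nospam-497000
    out/minikube-darwin-amd64 start -p nospam-497000 --driver=docker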

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (283.4s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-457000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0331 10:33:27.203185    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:33:54.884639    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:34:30.610491    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:30.616828    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:30.628495    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:30.650526    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:30.690691    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:30.771339    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:30.932249    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:31.253423    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:31.893687    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:33.175913    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:35.737547    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:40.857523    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:34:51.097265    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:35:11.576667    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 10:35:52.536526    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-457000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m43.354196479s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-457000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-457000 in cluster ingress-addon-legacy-457000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 23.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
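Note: the duplicated "Generating certificates and keys ... / Booting up control plane ..." pair in the stdout above suggests the kubeadm bootstrap was attempted twice before start gave up with exit status 109 after 4m43s. One way to pull fuller diagnostics from such a profile, assuming the container still exists (a sketch, not part of this run), would be:

    out/minikube-darwin-amd64 logs -p ingress-addon-legacy-457000 --file=ingress-addon-legacy-457000.log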
** stderr ** 
	I0331 10:31:45.057184    5849 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:31:45.057358    5849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:31:45.057364    5849 out.go:309] Setting ErrFile to fd 2...
	I0331 10:31:45.057368    5849 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:31:45.057477    5849 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 10:31:45.058874    5849 out.go:303] Setting JSON to false
	I0331 10:31:45.078960    5849 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1873,"bootTime":1680282032,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 10:31:45.079046    5849 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 10:31:45.101176    5849 out.go:177] * [ingress-addon-legacy-457000] minikube v1.29.0 on Darwin 13.3
	I0331 10:31:45.144244    5849 notify.go:220] Checking for updates...
	I0331 10:31:45.166103    5849 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 10:31:45.187355    5849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 10:31:45.209318    5849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 10:31:45.231267    5849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 10:31:45.253361    5849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 10:31:45.275347    5849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 10:31:45.297446    5849 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 10:31:45.361744    5849 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 10:31:45.361869    5849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:31:45.549811    5849 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-31 17:31:45.414600331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:31:45.571477    5849 out.go:177] * Using the docker driver based on user configuration
	I0331 10:31:45.593495    5849 start.go:295] selected driver: docker
	I0331 10:31:45.593515    5849 start.go:859] validating driver "docker" against <nil>
	I0331 10:31:45.593534    5849 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 10:31:45.597716    5849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:31:45.784198    5849 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-31 17:31:45.651468128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:31:45.784304    5849 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0331 10:31:45.784488    5849 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0331 10:31:45.806320    5849 out.go:177] * Using Docker Desktop driver with root privileges
	I0331 10:31:45.828215    5849 cni.go:84] Creating CNI manager for ""
	I0331 10:31:45.828252    5849 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 10:31:45.828269    5849 start_flags.go:319] config:
	{Name:ingress-addon-legacy-457000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:31:45.850167    5849 out.go:177] * Starting control plane node ingress-addon-legacy-457000 in cluster ingress-addon-legacy-457000
	I0331 10:31:45.872247    5849 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 10:31:45.894184    5849 out.go:177] * Pulling base image ...
	I0331 10:31:45.935900    5849 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0331 10:31:45.935902    5849 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 10:31:45.996873    5849 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 10:31:45.996899    5849 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 10:31:46.024226    5849 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0331 10:31:46.024266    5849 cache.go:57] Caching tarball of preloaded images
	I0331 10:31:46.024673    5849 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0331 10:31:46.046729    5849 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0331 10:31:46.089373    5849 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:31:46.293815    5849 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0331 10:32:05.501284    5849 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:32:05.501486    5849 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:32:06.111794    5849 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0331 10:32:06.112068    5849 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/config.json ...
	I0331 10:32:06.112102    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/config.json: {Name:mk3baf5080fa5ad5490b7f0cb96ea49e087f0496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:32:06.112415    5849 cache.go:193] Successfully downloaded all kic artifacts
	I0331 10:32:06.112442    5849 start.go:364] acquiring machines lock for ingress-addon-legacy-457000: {Name:mka3aa3fa4fa4d3e8d69dca89b130757bad772a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 10:32:06.112591    5849 start.go:368] acquired machines lock for "ingress-addon-legacy-457000" in 142.175µs
	I0331 10:32:06.112613    5849 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-457000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 10:32:06.112678    5849 start.go:125] createHost starting for "" (driver="docker")
	I0331 10:32:06.135154    5849 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0331 10:32:06.135509    5849 start.go:159] libmachine.API.Create for "ingress-addon-legacy-457000" (driver="docker")
	I0331 10:32:06.135552    5849 client.go:168] LocalClient.Create starting
	I0331 10:32:06.135749    5849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem
	I0331 10:32:06.135821    5849 main.go:141] libmachine: Decoding PEM data...
	I0331 10:32:06.135851    5849 main.go:141] libmachine: Parsing certificate...
	I0331 10:32:06.135970    5849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem
	I0331 10:32:06.136018    5849 main.go:141] libmachine: Decoding PEM data...
	I0331 10:32:06.136036    5849 main.go:141] libmachine: Parsing certificate...
	I0331 10:32:06.157359    5849 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-457000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0331 10:32:06.220371    5849 cli_runner.go:211] docker network inspect ingress-addon-legacy-457000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0331 10:32:06.220485    5849 network_create.go:281] running [docker network inspect ingress-addon-legacy-457000] to gather additional debugging logs...
	I0331 10:32:06.220506    5849 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-457000
	W0331 10:32:06.277891    5849 cli_runner.go:211] docker network inspect ingress-addon-legacy-457000 returned with exit code 1
	I0331 10:32:06.277917    5849 network_create.go:284] error running [docker network inspect ingress-addon-legacy-457000]: docker network inspect ingress-addon-legacy-457000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-457000
	I0331 10:32:06.277934    5849 network_create.go:286] output of [docker network inspect ingress-addon-legacy-457000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-457000
	
	** /stderr **
	I0331 10:32:06.278029    5849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0331 10:32:06.336469    5849 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001152f10}
	I0331 10:32:06.336510    5849 network_create.go:123] attempt to create docker network ingress-addon-legacy-457000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0331 10:32:06.336587    5849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-457000 ingress-addon-legacy-457000
	I0331 10:32:06.425142    5849 network_create.go:107] docker network ingress-addon-legacy-457000 192.168.49.0/24 created
	I0331 10:32:06.425176    5849 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-457000" container
	I0331 10:32:06.425289    5849 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0331 10:32:06.482023    5849 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-457000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-457000 --label created_by.minikube.sigs.k8s.io=true
	I0331 10:32:06.541512    5849 oci.go:103] Successfully created a docker volume ingress-addon-legacy-457000
	I0331 10:32:06.541656    5849 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-457000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-457000 --entrypoint /usr/bin/test -v ingress-addon-legacy-457000:/var gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -d /var/lib
	I0331 10:32:06.998306    5849 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-457000
	I0331 10:32:06.998358    5849 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0331 10:32:06.998372    5849 kic.go:190] Starting extracting preloaded images to volume ...
	I0331 10:32:06.998484    5849 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-457000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -I lz4 -xf /preloaded.tar -C /extractDir
	I0331 10:32:12.984793    5849 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-457000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -I lz4 -xf /preloaded.tar -C /extractDir: (5.986517197s)
	I0331 10:32:12.984823    5849 kic.go:199] duration metric: took 5.986722 seconds to extract preloaded images to volume
	I0331 10:32:12.984953    5849 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0331 10:32:13.221645    5849 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-457000 --name ingress-addon-legacy-457000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-457000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-457000 --network ingress-addon-legacy-457000 --ip 192.168.49.2 --volume ingress-addon-legacy-457000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55
	I0331 10:32:13.592630    5849 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-457000 --format={{.State.Running}}
	I0331 10:32:13.654506    5849 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-457000 --format={{.State.Status}}
	I0331 10:32:13.721374    5849 cli_runner.go:164] Run: docker exec ingress-addon-legacy-457000 stat /var/lib/dpkg/alternatives/iptables
	I0331 10:32:13.828999    5849 oci.go:144] the created container "ingress-addon-legacy-457000" has a running status.
	I0331 10:32:13.829025    5849 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa...
	I0331 10:32:13.980609    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0331 10:32:13.980687    5849 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0331 10:32:14.088192    5849 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-457000 --format={{.State.Status}}
	I0331 10:32:14.147917    5849 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0331 10:32:14.147935    5849 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-457000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0331 10:32:14.257937    5849 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-457000 --format={{.State.Status}}
	I0331 10:32:14.316383    5849 machine.go:88] provisioning docker machine ...
	I0331 10:32:14.316419    5849 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-457000"
	I0331 10:32:14.316510    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:14.376134    5849 main.go:141] libmachine: Using SSH client type: native
	I0331 10:32:14.376511    5849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 50351 <nil> <nil>}
	I0331 10:32:14.376526    5849 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-457000 && echo "ingress-addon-legacy-457000" | sudo tee /etc/hostname
	I0331 10:32:14.520676    5849 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-457000
	
	I0331 10:32:14.520775    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:14.581776    5849 main.go:141] libmachine: Using SSH client type: native
	I0331 10:32:14.582113    5849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 50351 <nil> <nil>}
	I0331 10:32:14.582129    5849 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-457000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-457000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-457000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 10:32:14.718606    5849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 10:32:14.718631    5849 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 10:32:14.718660    5849 ubuntu.go:177] setting up certificates
	I0331 10:32:14.718680    5849 provision.go:83] configureAuth start
	I0331 10:32:14.718771    5849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-457000
	I0331 10:32:14.777734    5849 provision.go:138] copyHostCerts
	I0331 10:32:14.777780    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 10:32:14.777847    5849 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 10:32:14.777854    5849 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 10:32:14.777967    5849 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 10:32:14.778120    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 10:32:14.778161    5849 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 10:32:14.778165    5849 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 10:32:14.778235    5849 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 10:32:14.778340    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 10:32:14.778379    5849 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 10:32:14.778384    5849 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 10:32:14.778446    5849 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 10:32:14.778555    5849 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-457000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-457000]
	I0331 10:32:14.946990    5849 provision.go:172] copyRemoteCerts
	I0331 10:32:14.947080    5849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 10:32:14.947139    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:15.009221    5849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50351 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa Username:docker}
	I0331 10:32:15.104746    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0331 10:32:15.104825    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 10:32:15.122060    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0331 10:32:15.122139    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0331 10:32:15.139035    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0331 10:32:15.139108    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0331 10:32:15.155996    5849 provision.go:86] duration metric: configureAuth took 437.323419ms
	I0331 10:32:15.156009    5849 ubuntu.go:193] setting minikube options for container-runtime
	I0331 10:32:15.156165    5849 config.go:182] Loaded profile config "ingress-addon-legacy-457000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0331 10:32:15.156230    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:15.215766    5849 main.go:141] libmachine: Using SSH client type: native
	I0331 10:32:15.216112    5849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 50351 <nil> <nil>}
	I0331 10:32:15.216130    5849 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 10:32:15.350454    5849 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 10:32:15.350469    5849 ubuntu.go:71] root file system type: overlay
	I0331 10:32:15.350553    5849 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 10:32:15.350633    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:15.410811    5849 main.go:141] libmachine: Using SSH client type: native
	I0331 10:32:15.411157    5849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 50351 <nil> <nil>}
	I0331 10:32:15.411206    5849 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 10:32:15.556321    5849 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 10:32:15.556417    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:15.615979    5849 main.go:141] libmachine: Using SSH client type: native
	I0331 10:32:15.616325    5849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 50351 <nil> <nil>}
	I0331 10:32:15.616339    5849 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 10:32:16.212898    5849 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-03-27 16:16:18.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 17:32:15.554765842 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0331 10:32:16.212928    5849 machine.go:91] provisioned docker machine in 1.896608799s
	I0331 10:32:16.212937    5849 client.go:171] LocalClient.Create took 10.07783478s
	I0331 10:32:16.212958    5849 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-457000" took 10.077907464s
	I0331 10:32:16.212971    5849 start.go:300] post-start starting for "ingress-addon-legacy-457000" (driver="docker")
	I0331 10:32:16.212976    5849 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 10:32:16.213063    5849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 10:32:16.213135    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:16.274598    5849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50351 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa Username:docker}
	I0331 10:32:16.371211    5849 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 10:32:16.374783    5849 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 10:32:16.374800    5849 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 10:32:16.374807    5849 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 10:32:16.374812    5849 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 10:32:16.374820    5849 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 10:32:16.374920    5849 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 10:32:16.375112    5849 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 10:32:16.375119    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> /etc/ssl/certs/28002.pem
	I0331 10:32:16.375313    5849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 10:32:16.382643    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 10:32:16.399517    5849 start.go:303] post-start completed in 186.534732ms
	I0331 10:32:16.400032    5849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-457000
	I0331 10:32:16.461599    5849 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/config.json ...
	I0331 10:32:16.462029    5849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 10:32:16.462086    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:16.521220    5849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50351 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa Username:docker}
	I0331 10:32:16.614863    5849 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 10:32:16.619284    5849 start.go:128] duration metric: createHost completed in 10.507073997s
	I0331 10:32:16.619303    5849 start.go:83] releasing machines lock for "ingress-addon-legacy-457000", held for 10.507180286s
	I0331 10:32:16.619385    5849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-457000
	I0331 10:32:16.679302    5849 ssh_runner.go:195] Run: cat /version.json
	I0331 10:32:16.679331    5849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 10:32:16.679368    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:16.679401    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:16.741098    5849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50351 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa Username:docker}
	I0331 10:32:16.746094    5849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50351 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa Username:docker}
	W0331 10:32:16.885320    5849 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 10:32:16.885397    5849 ssh_runner.go:195] Run: systemctl --version
	I0331 10:32:16.890363    5849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0331 10:32:16.895441    5849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0331 10:32:16.915266    5849 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0331 10:32:16.915335    5849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0331 10:32:16.929135    5849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0331 10:32:16.936932    5849 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0331 10:32:16.936951    5849 start.go:481] detecting cgroup driver to use...
	I0331 10:32:16.936962    5849 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 10:32:16.937033    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 10:32:16.949792    5849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0331 10:32:16.958331    5849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 10:32:16.966584    5849 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 10:32:16.966642    5849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 10:32:16.975114    5849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 10:32:16.983406    5849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 10:32:16.991770    5849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 10:32:17.001645    5849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 10:32:17.009854    5849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 10:32:17.018231    5849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 10:32:17.025430    5849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 10:32:17.032437    5849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 10:32:17.096945    5849 ssh_runner.go:195] Run: sudo systemctl restart containerd
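The sed commands above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false) before the daemon is reloaded and restarted. A small Go sketch of the same rewrite applied in-process (the TOML snippet is illustrative):

    // cgroupdriver_sketch.go: the in-process equivalent of
    // sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.cri.containerd.runtimes.runc.options]\n" +
            "  SystemdCgroup = true\n"
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }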
	I0331 10:32:17.177162    5849 start.go:481] detecting cgroup driver to use...
	I0331 10:32:17.177181    5849 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 10:32:17.177244    5849 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 10:32:17.187697    5849 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 10:32:17.187762    5849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 10:32:17.197981    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 10:32:17.211546    5849 ssh_runner.go:195] Run: which cri-dockerd
	I0331 10:32:17.215593    5849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 10:32:17.223588    5849 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 10:32:17.237918    5849 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 10:32:17.332884    5849 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 10:32:17.427992    5849 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 10:32:17.428018    5849 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 10:32:17.441449    5849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 10:32:17.531699    5849 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 10:32:17.742437    5849 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 10:32:17.768330    5849 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 10:32:17.818292    5849 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.2 ...
	I0331 10:32:17.818483    5849 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-457000 dig +short host.docker.internal
	I0331 10:32:17.942097    5849 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 10:32:17.942221    5849 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0331 10:32:17.946965    5849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
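The one-liner above is a filter-and-append update of /etc/hosts: drop any stale host.minikube.internal line, append the fresh mapping, and copy the result back over the original. A Go sketch of the same idiom (file path illustrative):

    // hosts_sketch.go: drop any stale entry for name, append the new mapping.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(upsertHost("/tmp/hosts", "192.168.65.2", "host.minikube.internal"))
    }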
	I0331 10:32:17.956714    5849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:32:18.093148    5849 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0331 10:32:18.093224    5849 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 10:32:18.113953    5849 docker.go:639] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0331 10:32:18.113980    5849 docker.go:645] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0331 10:32:18.114053    5849 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 10:32:18.124390    5849 ssh_runner.go:195] Run: which lz4
	I0331 10:32:18.128989    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0331 10:32:18.129169    5849 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0331 10:32:18.133566    5849 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0331 10:32:18.133606    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0331 10:32:24.269822    5849 docker.go:603] Took 6.141015 seconds to copy over tarball
	I0331 10:32:24.269904    5849 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0331 10:32:26.573937    5849 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.304105342s)
	I0331 10:32:26.573954    5849 ssh_runner.go:146] rm: /preloaded.tar.lz4
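The stat probe above failing with status 1 is what triggered the 424 MB preload transfer; had the tarball already been present on the node, the copy would have been skipped. A sketch of that check-then-copy decision (paths illustrative):

    // preload_sketch.go: transfer a file only when the stat probe says it is missing.
    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func ensureCopied(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // already present: skip the expensive transfer
        } else if !os.IsNotExist(err) {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        fmt.Println(ensureCopied("/tmp/src/preloaded.tar.lz4", "/tmp/preloaded.tar.lz4"))
    }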
	I0331 10:32:26.652237    5849 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 10:32:26.660696    5849 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0331 10:32:26.673352    5849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 10:32:26.739863    5849 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 10:32:27.793955    5849 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0541193s)
	I0331 10:32:27.794052    5849 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 10:32:27.813899    5849 docker.go:639] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0331 10:32:27.813911    5849 docker.go:645] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0331 10:32:27.813921    5849 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0331 10:32:27.823604    5849 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0331 10:32:27.824297    5849 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0331 10:32:27.825513    5849 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 10:32:27.826073    5849 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0331 10:32:27.826256    5849 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0331 10:32:27.826665    5849 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0331 10:32:27.830319    5849 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0331 10:32:27.833941    5849 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0331 10:32:27.838257    5849 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error: No such image: registry.k8s.io/pause:3.2
	I0331 10:32:27.843182    5849 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error: No such image: registry.k8s.io/coredns:1.6.7
	I0331 10:32:27.843398    5849 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0331 10:32:27.843575    5849 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0331 10:32:27.844582    5849 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 10:32:27.845167    5849 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0331 10:32:27.845413    5849 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0331 10:32:27.846849    5849 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error: No such image: registry.k8s.io/etcd:3.4.3-0
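The out-of-order "retrieving image" and "daemon lookup" lines above come from one goroutine per image; each asks the local daemon first and, failing that, falls back to the on-disk cache or the registry. A minimal sketch of the fan-out with the standard library (lookup logic stubbed):

    // imagefanout_sketch.go: one goroutine per image, as the interleaved
    // timestamps above suggest.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        images := []string{
            "registry.k8s.io/pause:3.2",
            "registry.k8s.io/coredns:1.6.7",
            "registry.k8s.io/etcd:3.4.3-0",
        }
        var wg sync.WaitGroup
        for _, img := range images {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                // Real code asks the local daemon first, then falls back to
                // the on-disk cache or the registry; stubbed here.
                fmt.Println("retrieving image:", name)
            }(img)
        }
        wg.Wait()
    }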
	I0331 10:32:28.963529    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0331 10:32:28.984262    5849 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0331 10:32:28.984307    5849 docker.go:313] Removing image: registry.k8s.io/pause:3.2
	I0331 10:32:28.984376    5849 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0331 10:32:29.005536    5849 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0331 10:32:29.156189    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0331 10:32:29.176715    5849 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0331 10:32:29.176745    5849 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.7
	I0331 10:32:29.176800    5849 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0331 10:32:29.198551    5849 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0331 10:32:29.379744    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0331 10:32:29.401376    5849 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0331 10:32:29.401411    5849 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0331 10:32:29.401477    5849 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0331 10:32:29.423162    5849 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0331 10:32:29.425572    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0331 10:32:29.446375    5849 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0331 10:32:29.446416    5849 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0331 10:32:29.446491    5849 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0331 10:32:29.466830    5849 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0331 10:32:30.085001    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 10:32:30.364396    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0331 10:32:30.386130    5849 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0331 10:32:30.386156    5849 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0331 10:32:30.386220    5849 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0331 10:32:30.405627    5849 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0331 10:32:30.492904    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0331 10:32:30.513716    5849 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0331 10:32:30.513741    5849 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0331 10:32:30.513812    5849 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0331 10:32:30.532315    5849 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0331 10:32:30.665542    5849 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0331 10:32:30.686711    5849 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0331 10:32:30.686734    5849 docker.go:313] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0331 10:32:30.686799    5849 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0331 10:32:30.706846    5849 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0331 10:32:30.706891    5849 cache_images.go:92] LoadImages completed in 2.893089394s
	W0331 10:32:30.706974    5849 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
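Each "needs transfer" decision above compares the image ID the runtime reports against the digest minikube expects. The preload carries k8s.gcr.io tags while this run wants registry.k8s.io names, so every image misses; the per-image cache files under .minikube/cache/images are absent too, producing the warning above. A trivial sketch of the comparison (IDs illustrative):

    // needstransfer_sketch.go: an image must be (re)loaded when the runtime
    // reports no ID or a different ID than expected.
    package main

    import "fmt"

    func needsTransfer(runtimeID, wantID string) bool {
        return runtimeID != wantID // "" means absent; mismatch means wrong build
    }

    func main() {
        fmt.Println(needsTransfer("", "sha256:80d28bedfe5d"))            // true: missing
        fmt.Println(needsTransfer("sha256:80d2", "sha256:80d28bedfe5d")) // true: mismatch
    }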
	I0331 10:32:30.707043    5849 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 10:32:30.732549    5849 cni.go:84] Creating CNI manager for ""
	I0331 10:32:30.732565    5849 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 10:32:30.732579    5849 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 10:32:30.732593    5849 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-457000 NodeName:ingress-addon-legacy-457000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 10:32:30.732719    5849 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-457000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0331 10:32:30.732778    5849 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-457000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
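The kubeadm YAML and kubelet unit above are rendered from Go text templates filled in from the options struct logged at kubeadm.go:172. A toy sketch of that rendering step (template trimmed to one stanza; values taken from the log):

    // kubeadmtmpl_sketch.go: render one stanza of the config from a template.
    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        tmpl := template.Must(template.New("kubeadm").Parse(
            "apiVersion: kubeadm.k8s.io/v1beta2\n" +
                "kind: InitConfiguration\n" +
                "localAPIEndpoint:\n" +
                "  advertiseAddress: {{.NodeIP}}\n" +
                "  bindPort: {{.Port}}\n"))
        _ = tmpl.Execute(os.Stdout, struct {
            NodeIP string
            Port   int
        }{NodeIP: "192.168.49.2", Port: 8443})
    }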
	I0331 10:32:30.732839    5849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0331 10:32:30.740738    5849 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 10:32:30.740794    5849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 10:32:30.748177    5849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0331 10:32:30.760969    5849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0331 10:32:30.774068    5849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0331 10:32:30.786962    5849 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0331 10:32:30.790717    5849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 10:32:30.800493    5849 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000 for IP: 192.168.49.2
	I0331 10:32:30.800517    5849 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:32:30.800769    5849 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 10:32:30.800836    5849 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 10:32:30.800876    5849 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/client.key
	I0331 10:32:30.800888    5849 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/client.crt with IP's: []
	I0331 10:32:30.961394    5849 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/client.crt ...
	I0331 10:32:30.961407    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/client.crt: {Name:mkf6a5432e51517b0857a6be03cda27060a3a068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:32:30.961798    5849 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/client.key ...
	I0331 10:32:30.961812    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/client.key: {Name:mk42c4ea12d0a460a88e179662496609205c2036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:32:30.962082    5849 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.key.dd3b5fb2
	I0331 10:32:30.962096    5849 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0331 10:32:31.147757    5849 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.crt.dd3b5fb2 ...
	I0331 10:32:31.147768    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.crt.dd3b5fb2: {Name:mk8c27326988eb9ee226f04e23908879b10d5ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:32:31.148066    5849 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.key.dd3b5fb2 ...
	I0331 10:32:31.148073    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.key.dd3b5fb2: {Name:mk79a1c9b98da78650aa97cf01aee5b1e5cf489e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:32:31.148301    5849 certs.go:333] copying /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.crt
	I0331 10:32:31.148481    5849 certs.go:337] copying /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.key
	I0331 10:32:31.148656    5849 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.key
	I0331 10:32:31.148672    5849 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.crt with IP's: []
	I0331 10:32:31.229052    5849 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.crt ...
	I0331 10:32:31.229061    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.crt: {Name:mk88d47871001d47f2f97b4cde9fe6861c994a73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:32:31.229281    5849 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.key ...
	I0331 10:32:31.229289    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.key: {Name:mke8627f1e136da6184c8392d5d8f3138f200f82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
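Each "Generating cert ... with IP's" step above builds an x509 certificate whose IP SANs cover the node, service, and loopback addresses. A self-contained Go sketch of issuing such a certificate (self-signed here for brevity; the real apiserver cert is signed by minikubeCA):

    // certgen_sketch.go: issue a certificate whose IP SANs match the log line
    // "with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]".
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        // Self-signed for brevity; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }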
	I0331 10:32:31.229493    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0331 10:32:31.229525    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0331 10:32:31.229547    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0331 10:32:31.229569    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0331 10:32:31.229591    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0331 10:32:31.229613    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0331 10:32:31.229632    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0331 10:32:31.229651    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0331 10:32:31.229746    5849 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 10:32:31.229802    5849 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 10:32:31.229814    5849 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 10:32:31.229848    5849 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 10:32:31.229879    5849 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 10:32:31.229915    5849 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 10:32:31.229983    5849 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 10:32:31.230014    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem -> /usr/share/ca-certificates/2800.pem
	I0331 10:32:31.230034    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> /usr/share/ca-certificates/28002.pem
	I0331 10:32:31.230058    5849 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0331 10:32:31.230521    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 10:32:31.249154    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 10:32:31.266449    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 10:32:31.283641    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/ingress-addon-legacy-457000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 10:32:31.300969    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 10:32:31.317967    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 10:32:31.335042    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 10:32:31.352354    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 10:32:31.369494    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 10:32:31.386957    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 10:32:31.404105    5849 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 10:32:31.421236    5849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 10:32:31.434073    5849 ssh_runner.go:195] Run: openssl version
	I0331 10:32:31.439660    5849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 10:32:31.447886    5849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 10:32:31.451909    5849 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 10:32:31.451948    5849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 10:32:31.457326    5849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
	I0331 10:32:31.465580    5849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 10:32:31.473765    5849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 10:32:31.477810    5849 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 10:32:31.477855    5849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 10:32:31.483633    5849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
	I0331 10:32:31.492193    5849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 10:32:31.500231    5849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 10:32:31.504221    5849 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 10:32:31.504262    5849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 10:32:31.509690    5849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
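The openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlink OpenSSL consults during CA lookup (b5213941.0 for minikubeCA in this run). A sketch of the same probe from Go (certificate path illustrative):

    // cahash_sketch.go: the subject hash that names /etc/ssl/certs/<hash>.0.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/tmp/minikubeCA.pem").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        hash := strings.TrimSpace(string(out))
        fmt.Printf("would link /etc/ssl/certs/%s.0\n", hash) // b5213941.0 in this run
    }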
	I0331 10:32:31.517827    5849 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-457000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:32:31.517933    5849 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 10:32:31.536847    5849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 10:32:31.544509    5849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 10:32:31.551917    5849 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 10:32:31.551970    5849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 10:32:31.559662    5849 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 10:32:31.559694    5849 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 10:32:31.606094    5849 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0331 10:32:31.606160    5849 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 10:32:31.772426    5849 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 10:32:31.772522    5849 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 10:32:31.772595    5849 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 10:32:31.924075    5849 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 10:32:31.924618    5849 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 10:32:31.924657    5849 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0331 10:32:31.995863    5849 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 10:32:32.017702    5849 out.go:204]   - Generating certificates and keys ...
	I0331 10:32:32.017803    5849 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 10:32:32.017866    5849 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 10:32:32.128396    5849 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0331 10:32:32.459374    5849 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0331 10:32:32.546800    5849 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0331 10:32:32.695631    5849 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0331 10:32:32.786973    5849 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0331 10:32:32.787121    5849 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-457000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0331 10:32:33.070675    5849 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0331 10:32:33.070824    5849 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-457000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0331 10:32:33.375171    5849 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0331 10:32:33.496062    5849 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0331 10:32:33.603443    5849 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0331 10:32:33.603522    5849 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 10:32:33.893364    5849 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 10:32:33.939005    5849 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 10:32:34.033654    5849 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 10:32:34.158722    5849 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 10:32:34.159210    5849 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 10:32:34.179732    5849 out.go:204]   - Booting up control plane ...
	I0331 10:32:34.179895    5849 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 10:32:34.180068    5849 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 10:32:34.180183    5849 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 10:32:34.180355    5849 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 10:32:34.180661    5849 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 10:33:14.166185    5849 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 10:33:14.166958    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:33:14.167168    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:33:19.168319    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:33:19.168549    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:33:29.169424    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:33:29.169668    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:33:49.170316    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:33:49.170539    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:34:29.171452    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:34:29.171672    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
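The repeated kubelet-check lines above are probes of the kubelet's health endpoint on port 10248, retried until kubeadm's 4m0s budget runs out. A minimal Go sketch of such a poll loop (interval and timeout illustrative):

    // kubeletprobe_sketch.go: poll the kubelet healthz endpoint until it
    // answers 200 or the deadline passes, as kubelet-check does above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return true
                }
            }
            time.Sleep(5 * time.Second)
        }
        return false
    }

    func main() {
        if !waitHealthy("http://localhost:10248/healthz", 40*time.Second) {
            fmt.Println("kubelet never reported healthy")
        }
    }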
	I0331 10:34:29.171684    5849 kubeadm.go:322] 
	I0331 10:34:29.171751    5849 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0331 10:34:29.171815    5849 kubeadm.go:322] 		timed out waiting for the condition
	I0331 10:34:29.171835    5849 kubeadm.go:322] 
	I0331 10:34:29.171909    5849 kubeadm.go:322] 	This error is likely caused by:
	I0331 10:34:29.171937    5849 kubeadm.go:322] 		- The kubelet is not running
	I0331 10:34:29.172036    5849 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 10:34:29.172058    5849 kubeadm.go:322] 
	I0331 10:34:29.172159    5849 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 10:34:29.172187    5849 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0331 10:34:29.172262    5849 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0331 10:34:29.172278    5849 kubeadm.go:322] 
	I0331 10:34:29.172368    5849 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 10:34:29.172446    5849 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0331 10:34:29.172456    5849 kubeadm.go:322] 
	I0331 10:34:29.172507    5849 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0331 10:34:29.172544    5849 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0331 10:34:29.172595    5849 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0331 10:34:29.172620    5849 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0331 10:34:29.172630    5849 kubeadm.go:322] 
	I0331 10:34:29.174960    5849 kubeadm.go:322] W0331 17:32:31.605560    1452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0331 10:34:29.175143    5849 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 10:34:29.175226    5849 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 10:34:29.175354    5849 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 19.03
	I0331 10:34:29.175430    5849 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 10:34:29.175539    5849 kubeadm.go:322] W0331 17:32:34.163026    1452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0331 10:34:29.175637    5849 kubeadm.go:322] W0331 17:32:34.163733    1452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0331 10:34:29.175709    5849 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 10:34:29.175781    5849 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0331 10:34:29.175951    5849 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-457000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-457000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0331 17:32:31.605560    1452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0331 17:32:34.163026    1452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0331 17:32:34.163733    1452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-457000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-457000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0331 17:32:31.605560    1452 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0331 17:32:34.163026    1452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0331 17:32:34.163733    1452 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0331 10:34:29.175988    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0331 10:34:29.589228    5849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 10:34:29.599159    5849 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 10:34:29.599212    5849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 10:34:29.606755    5849 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 10:34:29.606798    5849 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 10:34:29.655410    5849 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0331 10:34:29.655454    5849 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 10:34:29.821992    5849 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 10:34:29.822090    5849 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 10:34:29.822174    5849 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 10:34:29.975410    5849 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 10:34:29.975933    5849 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 10:34:29.975976    5849 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0331 10:34:30.047979    5849 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 10:34:30.069731    5849 out.go:204]   - Generating certificates and keys ...
	I0331 10:34:30.069841    5849 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 10:34:30.069902    5849 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 10:34:30.069976    5849 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 10:34:30.070020    5849 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 10:34:30.070104    5849 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 10:34:30.070156    5849 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 10:34:30.070222    5849 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 10:34:30.070283    5849 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 10:34:30.070361    5849 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 10:34:30.070428    5849 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 10:34:30.070462    5849 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 10:34:30.070502    5849 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 10:34:30.240062    5849 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 10:34:30.467542    5849 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 10:34:30.666823    5849 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 10:34:30.799066    5849 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 10:34:30.799460    5849 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 10:34:30.821125    5849 out.go:204]   - Booting up control plane ...
	I0331 10:34:30.821377    5849 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 10:34:30.821502    5849 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 10:34:30.821588    5849 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 10:34:30.821735    5849 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 10:34:30.822007    5849 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 10:35:10.806966    5849 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 10:35:10.807751    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:35:10.807902    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:35:15.809477    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:35:15.809681    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:35:25.810325    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:35:25.810484    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:35:45.811514    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:35:45.811714    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:36:25.810288    5849 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 10:36:25.810450    5849 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 10:36:25.810462    5849 kubeadm.go:322] 
	I0331 10:36:25.810509    5849 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0331 10:36:25.810540    5849 kubeadm.go:322] 		timed out waiting for the condition
	I0331 10:36:25.810544    5849 kubeadm.go:322] 
	I0331 10:36:25.810622    5849 kubeadm.go:322] 	This error is likely caused by:
	I0331 10:36:25.810672    5849 kubeadm.go:322] 		- The kubelet is not running
	I0331 10:36:25.810763    5849 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 10:36:25.810776    5849 kubeadm.go:322] 
	I0331 10:36:25.810899    5849 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 10:36:25.810945    5849 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0331 10:36:25.810972    5849 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0331 10:36:25.810978    5849 kubeadm.go:322] 
	I0331 10:36:25.811064    5849 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 10:36:25.811127    5849 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0331 10:36:25.811134    5849 kubeadm.go:322] 
	I0331 10:36:25.811210    5849 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0331 10:36:25.811250    5849 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0331 10:36:25.811307    5849 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0331 10:36:25.811346    5849 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0331 10:36:25.811357    5849 kubeadm.go:322] 
	I0331 10:36:25.813673    5849 kubeadm.go:322] W0331 17:34:29.654670    3859 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0331 10:36:25.813832    5849 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 10:36:25.813935    5849 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 10:36:25.814051    5849 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 19.03
	I0331 10:36:25.814141    5849 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 10:36:25.814245    5849 kubeadm.go:322] W0331 17:34:30.804184    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0331 10:36:25.814340    5849 kubeadm.go:322] W0331 17:34:30.805070    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0331 10:36:25.814404    5849 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 10:36:25.814471    5849 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0331 10:36:25.814496    5849 kubeadm.go:403] StartCluster complete in 3m54.307261106s
	I0331 10:36:25.814598    5849 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 10:36:25.835087    5849 logs.go:277] 0 containers: []
	W0331 10:36:25.835100    5849 logs.go:279] No container was found matching "kube-apiserver"
	I0331 10:36:25.835176    5849 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 10:36:25.854460    5849 logs.go:277] 0 containers: []
	W0331 10:36:25.854473    5849 logs.go:279] No container was found matching "etcd"
	I0331 10:36:25.854541    5849 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 10:36:25.875068    5849 logs.go:277] 0 containers: []
	W0331 10:36:25.875082    5849 logs.go:279] No container was found matching "coredns"
	I0331 10:36:25.875158    5849 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 10:36:25.895501    5849 logs.go:277] 0 containers: []
	W0331 10:36:25.895515    5849 logs.go:279] No container was found matching "kube-scheduler"
	I0331 10:36:25.895581    5849 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 10:36:25.914698    5849 logs.go:277] 0 containers: []
	W0331 10:36:25.914713    5849 logs.go:279] No container was found matching "kube-proxy"
	I0331 10:36:25.914797    5849 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 10:36:25.934562    5849 logs.go:277] 0 containers: []
	W0331 10:36:25.934575    5849 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 10:36:25.934645    5849 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 10:36:25.954289    5849 logs.go:277] 0 containers: []
	W0331 10:36:25.954303    5849 logs.go:279] No container was found matching "kindnet"
	I0331 10:36:25.954310    5849 logs.go:123] Gathering logs for kubelet ...
	I0331 10:36:25.954321    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 10:36:25.992164    5849 logs.go:123] Gathering logs for dmesg ...
	I0331 10:36:25.992178    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 10:36:26.004323    5849 logs.go:123] Gathering logs for describe nodes ...
	I0331 10:36:26.004337    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 10:36:26.058270    5849 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 10:36:26.058286    5849 logs.go:123] Gathering logs for Docker ...
	I0331 10:36:26.058297    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 10:36:26.085809    5849 logs.go:123] Gathering logs for container status ...
	I0331 10:36:26.085825    5849 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 10:36:28.130036    5849 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044291964s)
	W0331 10:36:28.130160    5849 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0331 17:34:29.654670    3859 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0331 17:34:30.804184    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0331 17:34:30.805070    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0331 10:36:28.130177    5849 out.go:239] * 
	W0331 10:36:28.130313    5849 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0331 17:34:29.654670    3859 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0331 17:34:30.804184    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0331 17:34:30.805070    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 10:36:28.130333    5849 out.go:239] * 
	W0331 10:36:28.130963    5849 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0331 10:36:28.209522    5849 out.go:177] 
	W0331 10:36:28.272767    5849 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0331 17:34:29.654670    3859 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0331 17:34:30.804184    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0331 17:34:30.805070    3859 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 10:36:28.272886    5849 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0331 10:36:28.272973    5849 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0331 10:36:28.294544    5849 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-457000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (283.40s)
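Both kubeadm attempts above fail identically: the kubelet health endpoint on 127.0.0.1:10248 never answers, wait-control-plane times out after 4m0s, and minikube exits with status 109 (K8S_KUBELET_NOT_RUNNING). The preflight warnings point at the likeliest cause, a cgroup-driver mismatch ("cgroupfs" detected where "systemd" is recommended) on an unvalidated Docker 23.0.2. A minimal triage sketch, built only from commands the log itself suggests; it assumes the ingress-addon-legacy-457000 node container is still up:

# Check the kubelet inside the node, per the log's own advice
minikube -p ingress-addon-legacy-457000 ssh -- sudo systemctl status kubelet
minikube -p ingress-addon-legacy-457000 ssh -- sudo journalctl -xeu kubelet

# List any Kubernetes containers the runtime started (none were found above)
minikube -p ingress-addon-legacy-457000 ssh -- 'docker ps -a | grep kube | grep -v pause'

# Retry with the cgroup driver the exit message suggests
out/minikube-darwin-amd64 start -p ingress-addon-legacy-457000 \
  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
  --extra-config=kubelet.cgroup-driver=systemd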
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (90.96s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-457000 addons enable ingress --alsologtostderr -v=5
E0331 10:37:14.455188    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-457000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m30.492914147s)
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	
-- /stdout --
** stderr ** 
	I0331 10:36:28.441581    6304 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:36:28.441874    6304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:36:28.441880    6304 out.go:309] Setting ErrFile to fd 2...
	I0331 10:36:28.441884    6304 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:36:28.441993    6304 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 10:36:28.463976    6304 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0331 10:36:28.485334    6304 config.go:182] Loaded profile config "ingress-addon-legacy-457000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0331 10:36:28.485371    6304 addons.go:66] Setting ingress=true in profile "ingress-addon-legacy-457000"
	I0331 10:36:28.485385    6304 addons.go:228] Setting addon ingress=true in "ingress-addon-legacy-457000"
	I0331 10:36:28.485484    6304 host.go:66] Checking if "ingress-addon-legacy-457000" exists ...
	I0331 10:36:28.486451    6304 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-457000 --format={{.State.Status}}
	I0331 10:36:28.567466    6304 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0331 10:36:28.588607    6304 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0331 10:36:28.609673    6304 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0331 10:36:28.630444    6304 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0331 10:36:28.651781    6304 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0331 10:36:28.651802    6304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0331 10:36:28.651917    6304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:36:28.714032    6304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50351 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa Username:docker}
	I0331 10:36:28.815614    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:28.879052    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:28.879096    6304 retry.go:31] will retry after 298.267886ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:29.178377    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:29.231811    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:29.231838    6304 retry.go:31] will retry after 386.27706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:29.619130    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:29.671394    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:29.671413    6304 retry.go:31] will retry after 718.911847ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:30.390981    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:30.444599    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:30.444627    6304 retry.go:31] will retry after 740.40113ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:31.185334    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:31.240135    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:31.240153    6304 retry.go:31] will retry after 1.27875239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:32.521143    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:32.576943    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:32.576964    6304 retry.go:31] will retry after 2.380702584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:34.958655    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:35.012397    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:35.012420    6304 retry.go:31] will retry after 3.011102275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:38.025663    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:38.082154    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:38.082173    6304 retry.go:31] will retry after 5.882385305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:43.964945    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:44.018875    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:44.018900    6304 retry.go:31] will retry after 8.130896205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:52.150024    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:36:52.202018    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:36:52.202036    6304 retry.go:31] will retry after 9.545419408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:01.747528    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:37:01.804002    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:01.804026    6304 retry.go:31] will retry after 15.642056691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:17.447648    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:37:17.502792    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:17.502820    6304 retry.go:31] will retry after 16.091195408s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:33.595574    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:37:33.648701    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:33.648723    6304 retry.go:31] will retry after 25.074107347s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:58.721999    6304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0331 10:37:58.776917    6304 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:58.776954    6304 addons.go:464] Verifying addon ingress=true in "ingress-addon-legacy-457000"
	I0331 10:37:58.798582    6304 out.go:177] * Verifying ingress addon...
	I0331 10:37:58.820862    6304 out.go:177] 
	W0331 10:37:58.842250    6304 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-457000" does not exist: client config: context "ingress-addon-legacy-457000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-457000" does not exist: client config: context "ingress-addon-legacy-457000" does not exist]
	W0331 10:37:58.842269    6304 out.go:239] * 
	* 
	W0331 10:37:58.845194    6304 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0331 10:37:58.866186    6304 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
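Every apply retry above ends with "The connection to the server localhost:8443 was refused": the apiserver left broken by the failed StartLegacyK8sCluster run never came up, so the addon enable could only exhaust its retries. A hedged diagnostic sketch (curl and docker are assumed to be present in the kicbase node image):

	# Is the apiserver container running inside the minikube node?
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-457000 -- docker ps --filter name=kube-apiserver
	# Probe the endpoint the retries were hitting
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-457000 -- curl -ksS https://localhost:8443/healthz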
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-457000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-457000:

-- stdout --
	[
	    {
	        "Id": "d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6",
	        "Created": "2023-03-31T17:32:13.286002778Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T17:32:13.58495732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/hostname",
	        "HostsPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/hosts",
	        "LogPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6-json.log",
	        "Name": "/ingress-addon-legacy-457000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-457000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-457000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-457000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-457000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-457000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-457000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-457000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "551ef7c80f2d6983e4bdfc7fb45a7be3930fafe9374cb408e1135c46a2f23670",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50351"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50352"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50354"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50355"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/551ef7c80f2d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-457000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d434272d534f",
	                        "ingress-addon-legacy-457000"
	                    ],
	                    "NetworkID": "7fe6d29ddada548d6372c3bdc7cc03de0684b7c3af918c765003cd0c6b4013a4",
	                    "EndpointID": "f1ca27ed1d86ffbe40fe501c301cba5ce50544e0505de29f5958b0615eb6efd4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
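The NetworkSettings.Ports block in the inspect output above is how the harness resolved SSH port 50351 (sshutil.go:53): it evaluates the same Go template seen in the cli_runner lines. The equivalent standalone command, lifted from the log, is:

	# Recover the host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-457000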
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-457000 -n ingress-addon-legacy-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-457000 -n ingress-addon-legacy-457000: exit status 6 (400.41362ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 10:37:59.341082    6384 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-457000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-457000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (90.96s)
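The status check above (exit status 6) reports a stale kubectl context, and status.go:415 shows the profile missing from the kubeconfig entirely. A minimal sketch of the fix the warning itself recommends, assuming the profile still exists:

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-457000
	kubectl config get-contexts   # ingress-addon-legacy-457000 should now be listed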

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (116.99s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-457000 addons enable ingress-dns --alsologtostderr -v=5
E0331 10:38:27.188174    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:39:30.598098    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-457000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m56.529082313s)

-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0331 10:37:59.392482    6394 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:37:59.392727    6394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:37:59.392734    6394 out.go:309] Setting ErrFile to fd 2...
	I0331 10:37:59.392738    6394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:37:59.392852    6394 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 10:37:59.415521    6394 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0331 10:37:59.437489    6394 config.go:182] Loaded profile config "ingress-addon-legacy-457000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0331 10:37:59.437529    6394 addons.go:66] Setting ingress-dns=true in profile "ingress-addon-legacy-457000"
	I0331 10:37:59.437544    6394 addons.go:228] Setting addon ingress-dns=true in "ingress-addon-legacy-457000"
	I0331 10:37:59.437618    6394 host.go:66] Checking if "ingress-addon-legacy-457000" exists ...
	I0331 10:37:59.438554    6394 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-457000 --format={{.State.Status}}
	I0331 10:37:59.519903    6394 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0331 10:37:59.544230    6394 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0331 10:37:59.565704    6394 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0331 10:37:59.565729    6394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0331 10:37:59.565834    6394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-457000
	I0331 10:37:59.625779    6394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50351 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/ingress-addon-legacy-457000/id_rsa Username:docker}
	I0331 10:37:59.727682    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:37:59.778069    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:59.778114    6394 retry.go:31] will retry after 170.659942ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:37:59.951012    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:00.008053    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:00.008073    6394 retry.go:31] will retry after 501.001937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:00.510858    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:00.565366    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:00.565393    6394 retry.go:31] will retry after 491.219417ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:01.056838    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:01.110763    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:01.110782    6394 retry.go:31] will retry after 444.528392ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:01.557574    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:01.612650    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:01.612676    6394 retry.go:31] will retry after 1.273913568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:02.888259    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:02.941843    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:02.941866    6394 retry.go:31] will retry after 1.01977451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:03.963888    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:04.016033    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:04.016058    6394 retry.go:31] will retry after 1.640288691s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:05.656579    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:05.710627    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:05.710647    6394 retry.go:31] will retry after 6.131740696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:11.844405    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:11.900097    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:11.900116    6394 retry.go:31] will retry after 7.009248698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:18.910449    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:18.967397    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:18.967416    6394 retry.go:31] will retry after 7.601870167s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:26.569602    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:26.622703    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:26.622722    6394 retry.go:31] will retry after 19.204966707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:45.829103    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:38:45.883844    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:38:45.883866    6394 retry.go:31] will retry after 26.856087331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:39:12.740986    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:39:12.798114    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:39:12.798139    6394 retry.go:31] will retry after 42.93500498s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:39:55.733461    6394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0331 10:39:55.788156    6394 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0331 10:39:55.809685    6394 out.go:177] 
	W0331 10:39:55.830540    6394 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0331 10:39:55.830559    6394 out.go:239] * 
	* 
	W0331 10:39:55.832900    6394 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0331 10:39:55.853289    6394 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-457000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-457000:

-- stdout --
	[
	    {
	        "Id": "d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6",
	        "Created": "2023-03-31T17:32:13.286002778Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T17:32:13.58495732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/hostname",
	        "HostsPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/hosts",
	        "LogPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6-json.log",
	        "Name": "/ingress-addon-legacy-457000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-457000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-457000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-457000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-457000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-457000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-457000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-457000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "551ef7c80f2d6983e4bdfc7fb45a7be3930fafe9374cb408e1135c46a2f23670",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50351"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50352"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50354"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50355"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/551ef7c80f2d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-457000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d434272d534f",
	                        "ingress-addon-legacy-457000"
	                    ],
	                    "NetworkID": "7fe6d29ddada548d6372c3bdc7cc03de0684b7c3af918c765003cd0c6b4013a4",
	                    "EndpointID": "f1ca27ed1d86ffbe40fe501c301cba5ce50544e0505de29f5958b0615eb6efd4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-457000 -n ingress-addon-legacy-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-457000 -n ingress-addon-legacy-457000: exit status 6 (401.070966ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 10:39:56.328382    6508 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-457000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-457000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (116.99s)
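
The post-mortem above also trips over a secondary problem: per the status.go:415 error, the profile has no cluster entry in the kubeconfig, which is exactly what the `minikube update-context` hint addresses. The check itself is straightforward with client-go's kubeconfig loader; the following sketch only illustrates the condition (the hard-coded profile name and the use of clientcmd here are assumptions, not the harness's code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// Loads a kubeconfig and reports whether the profile's cluster entry is
// present -- the condition whose absence produces the "does not appear
// in ... kubeconfig" status error above.
func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: kubeconfig-check <path>")
		return
	}
	cfg, err := clientcmd.LoadFromFile(os.Args[1])
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	const profile = "ingress-addon-legacy-457000"
	if _, ok := cfg.Clusters[profile]; !ok {
		fmt.Printf("%q does not appear in %s; `minikube update-context -p %s` would repair it\n",
			profile, os.Args[1], profile)
	}
}
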

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:176: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-457000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-457000:

-- stdout --
	[
	    {
	        "Id": "d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6",
	        "Created": "2023-03-31T17:32:13.286002778Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T17:32:13.58495732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/hostname",
	        "HostsPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/hosts",
	        "LogPath": "/var/lib/docker/containers/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6/d434272d534f7c48e88e15d54bd4e93eb0707a4a07691b570184bee2e889dee6-json.log",
	        "Name": "/ingress-addon-legacy-457000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-457000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-457000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30340c0548c62dd7914ba9ac15b738e11661cfcbfd2b4c98275bf9c7eea47b3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-457000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-457000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-457000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-457000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-457000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "551ef7c80f2d6983e4bdfc7fb45a7be3930fafe9374cb408e1135c46a2f23670",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50351"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50352"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50354"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50355"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/551ef7c80f2d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-457000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d434272d534f",
	                        "ingress-addon-legacy-457000"
	                    ],
	                    "NetworkID": "7fe6d29ddada548d6372c3bdc7cc03de0684b7c3af918c765003cd0c6b4013a4",
	                    "EndpointID": "f1ca27ed1d86ffbe40fe501c301cba5ce50544e0505de29f5958b0615eb6efd4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-457000 -n ingress-addon-legacy-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-457000 -n ingress-addon-legacy-457000: exit status 6 (400.504954ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 10:39:56.790695    6520 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-457000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-457000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

TestRunningBinaryUpgrade (76.89s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.423625976.exe start -p running-upgrade-267000 --memory=2200 --vm-driver=docker 
E0331 10:59:30.556897    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.423625976.exe start -p running-upgrade-267000 --memory=2200 --vm-driver=docker : exit status 70 (59.453984291s)

-- stdout --
	! [running-upgrade-267000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3848751788
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 17:59:41.631558880 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-267000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:00:00.946690982 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-267000", then "minikube start -p running-upgrade-267000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:00:00.946690982 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
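
The generated docker.service shown in the diff leans on a systemd rule its own comments spell out: for anything other than Type=oneshot, a unit may carry only one ExecStart=, so a second definition is only legal after an empty ExecStart= clears the inherited one. The same pattern applies when overriding a unit through a drop-in instead of rewriting the file wholesale, as this sketch does (the override path and the trimmed dockerd command line are illustrative assumptions, not what minikube writes):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// An empty ExecStart= resets the inherited command list so the
// replacement on the next line is the unit's only ExecStart=.
const override = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
`

func main() {
	dir := "/etc/systemd/system/docker.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(dir+"/override.conf", []byte(override), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	// Reload unit definitions, then restart the service with the override.
	for _, args := range [][]string{{"daemon-reload"}, {"restart", "docker"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			fmt.Printf("systemctl %v failed: %v\n%s", args, err, out)
			return
		}
	}
}
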
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.423625976.exe start -p running-upgrade-267000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.423625976.exe start -p running-upgrade-267000 --memory=2200 --vm-driver=docker : exit status 70 (4.500003644s)

-- stdout --
	* [running-upgrade-267000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig2810646224
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-267000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.423625976.exe start -p running-upgrade-267000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.423625976.exe start -p running-upgrade-267000 --memory=2200 --vm-driver=docker : exit status 70 (4.753647925s)

-- stdout --
	* [running-upgrade-267000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1831373121
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-267000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
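
The log shows three start attempts with the legacy binary before the test gave up. Stripped of the harness plumbing, that control flow is a plain bounded retry; the sketch below is a standalone illustration with an assumed attempt count, delay, and command line, not the helper version_upgrade_test.go actually uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithRetries executes a command up to `attempts` times, pausing
// between failures -- the same shape as the three start attempts
// logged above.
func runWithRetries(attempts int, delay time.Duration, name string, args ...string) error {
	var err error
	for i := 1; i <= attempts; i++ {
		if err = exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d/%d failed: %v\n", i, attempts, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
}

func main() {
	err := runWithRetries(3, 5*time.Second,
		"minikube", "start", "-p", "running-upgrade-267000", "--memory=2200", "--vm-driver=docker")
	if err != nil {
		fmt.Println(err)
	}
}
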
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-03-31 11:00:14.468158 -0700 PDT m=+2419.398125602
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-267000
helpers_test.go:235: (dbg) docker inspect running-upgrade-267000:

-- stdout --
	[
	    {
	        "Id": "f5f09d8f7ce78eeed7530f498ff5eca286ee852490c79b8c5a31d64ea4a6bf78",
	        "Created": "2023-03-31T17:59:50.00236432Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 174180,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T17:59:50.253786022Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/f5f09d8f7ce78eeed7530f498ff5eca286ee852490c79b8c5a31d64ea4a6bf78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5f09d8f7ce78eeed7530f498ff5eca286ee852490c79b8c5a31d64ea4a6bf78/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5f09d8f7ce78eeed7530f498ff5eca286ee852490c79b8c5a31d64ea4a6bf78/hosts",
	        "LogPath": "/var/lib/docker/containers/f5f09d8f7ce78eeed7530f498ff5eca286ee852490c79b8c5a31d64ea4a6bf78/f5f09d8f7ce78eeed7530f498ff5eca286ee852490c79b8c5a31d64ea4a6bf78-json.log",
	        "Name": "/running-upgrade-267000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-267000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/83559a3900ffd35e2ce2c16d8bec3c5c5771d5b21cd8173a05d6d4c0616046a1-init/diff:/var/lib/docker/overlay2/0f1f87dae13e3d6c1e26bf86f2861f8b91ce3789be7c4e92d8b155e5200ab693/diff:/var/lib/docker/overlay2/7f32cee17ad9c12a0f0db2baf7ca9024eedb273edaff0f4e86ef85e5700c84f5/diff:/var/lib/docker/overlay2/8c2d73bbfae80b6f94b7e962ae9854a96c2756a499cc5f64f263202d2497e917/diff:/var/lib/docker/overlay2/2d3aaf75c7cd24910d68b9e9de840dcda56c4d3b4724d7654e592f0f82eb633c/diff:/var/lib/docker/overlay2/58ea865d3f308ac251afe813f9b8886eaa5bfd34b8ec664284a86330e19db754/diff:/var/lib/docker/overlay2/d2299dc2840a2c6a1d6ff1f798df947bfb658aec896b24ed29e79ade04227db3/diff:/var/lib/docker/overlay2/fc4889ff6bbbd1cb558394386d643b61517255f9513b07f52f37a66637d960f2/diff:/var/lib/docker/overlay2/ed74bf189227b916ec42460d91816a91c1e6bf3c7667655cb2a88d0351d81549/diff:/var/lib/docker/overlay2/49482d68f5a4021d3fe4fb4f48411a3d52cdeae16c9d92931249c09954e4852c/diff:/var/lib/docker/overlay2/47f4ed
785727191a64e043e582a7d70b65899b9bbde289387ae3c661f286f90e/diff:/var/lib/docker/overlay2/ceb22616d74f3fb95ac5fca3f50b460c4a56f5156797be123a6ce27fd0c2a67f/diff:/var/lib/docker/overlay2/20e9689c79ca1cdc1688e38143f823a86af04057080a936b0d63c587026c6fe2/diff:/var/lib/docker/overlay2/3058c9134382eea8add3bff563eea094973c4def5d41ce15f932c10a126299a0/diff:/var/lib/docker/overlay2/29a0f131003172b131f3c25e8b88220209add31cbeef9e732c8e20871301efc2/diff:/var/lib/docker/overlay2/5f9292f06310de74dd01224f30ea82aa5bf6752eb3311569fe2eb57c5d1356a7/diff:/var/lib/docker/overlay2/51e19a56fc532e9bb18f1703bdcdd1c12eb6189d90643dbc807bc998d3896acc/diff:/var/lib/docker/overlay2/8711e8773b9ba238c5430e60197a3d7e50172f441405ffc46ae2372d688cf013/diff:/var/lib/docker/overlay2/c4cc9d2a44b270bc08b6071c5cf3b01153b21d8c58b43e092ae3d625ca2dca10/diff:/var/lib/docker/overlay2/ef3653e6a76e1e8038736a87465520c48ded8bb193b276a7686a2b738ec30395/diff:/var/lib/docker/overlay2/23aa646ae9e0cd40cebb809c52cfe2200ed57b7c32e264601e3d6341a630ce11/diff:/var/lib/d
ocker/overlay2/8be1bd5be2b47454afcc3c9311b96adf8427f9a33b09cd26cf0f190ee1775668/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83559a3900ffd35e2ce2c16d8bec3c5c5771d5b21cd8173a05d6d4c0616046a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83559a3900ffd35e2ce2c16d8bec3c5c5771d5b21cd8173a05d6d4c0616046a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83559a3900ffd35e2ce2c16d8bec3c5c5771d5b21cd8173a05d6d4c0616046a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-267000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-267000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-267000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-267000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-267000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e778798c9322240eb803e5a20d88315902efb17ede85783ebc8fa1d692346c5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51616"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51617"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51618"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1e778798c932",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "7260c7c2c45a83cef0aa50fa1477fa25a6dfaafd4ee3684f5c12d4d1cca465bf",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "f23c0e940708fc9a8af0a2f49ad9e9e7316326c4546197ad1784141561b58a8f",
	                    "EndpointID": "7260c7c2c45a83cef0aa50fa1477fa25a6dfaafd4ee3684f5c12d4d1cca465bf",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
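The Ports map in the inspect dump above can be queried directly instead of reading the full JSON; for example, the host port mapped to the container's SSH port (51616 here) can be read with the same Go template the harness itself uses later in this log, assuming the container still exists:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-267000
	# prints: 51616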
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-267000 -n running-upgrade-267000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-267000 -n running-upgrade-267000: exit status 6 (389.7551ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 11:00:14.909259   13148 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-267000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-267000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
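The stale-context warning above names its own fix; a minimal sketch, assuming the profile were kept alive instead of being deleted in the cleanup below (-p is minikube's global profile flag):

	# rewrite the kubeconfig entry so kubectl points at the current endpoint
	out/minikube-darwin-amd64 update-context -p running-upgrade-267000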
helpers_test.go:175: Cleaning up "running-upgrade-267000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-267000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-267000: (2.362916521s)
--- FAIL: TestRunningBinaryUpgrade (76.89s)

TestKubernetesUpgrade (392.94s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m22.946644592s)

-- stdout --
	* [kubernetes-upgrade-101000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-101000 in cluster kubernetes-upgrade-101000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0331 11:01:25.379619   13544 out.go:296] Setting OutFile to fd 1 ...
	I0331 11:01:25.379788   13544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:01:25.379795   13544 out.go:309] Setting ErrFile to fd 2...
	I0331 11:01:25.379799   13544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:01:25.379914   13544 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 11:01:25.381502   13544 out.go:303] Setting JSON to false
	I0331 11:01:25.402904   13544 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3653,"bootTime":1680282032,"procs":383,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 11:01:25.403000   13544 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 11:01:25.440337   13544 out.go:177] * [kubernetes-upgrade-101000] minikube v1.29.0 on Darwin 13.3
	I0331 11:01:25.482324   13544 notify.go:220] Checking for updates...
	I0331 11:01:25.482338   13544 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 11:01:25.503456   13544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:01:25.524479   13544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 11:01:25.598183   13544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 11:01:25.619705   13544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 11:01:25.642179   13544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 11:01:25.664095   13544 config.go:182] Loaded profile config "cert-expiration-298000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:01:25.664198   13544 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 11:01:25.730034   13544 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 11:01:25.730172   13544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:01:25.917701   13544 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-31 18:01:25.783925128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:01:25.939491   13544 out.go:177] * Using the docker driver based on user configuration
	I0331 11:01:25.976193   13544 start.go:295] selected driver: docker
	I0331 11:01:25.976222   13544 start.go:859] validating driver "docker" against <nil>
	I0331 11:01:25.976249   13544 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 11:01:25.979623   13544 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:01:26.166783   13544 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-31 18:01:26.033708446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:01:26.166892   13544 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0331 11:01:26.167078   13544 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0331 11:01:26.189017   13544 out.go:177] * Using Docker Desktop driver with root privileges
	I0331 11:01:26.209957   13544 cni.go:84] Creating CNI manager for ""
	I0331 11:01:26.209993   13544 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 11:01:26.210009   13544 start_flags.go:319] config:
	{Name:kubernetes-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:01:26.253886   13544 out.go:177] * Starting control plane node kubernetes-upgrade-101000 in cluster kubernetes-upgrade-101000
	I0331 11:01:26.274768   13544 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 11:01:26.295846   13544 out.go:177] * Pulling base image ...
	I0331 11:01:26.337613   13544 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:01:26.337655   13544 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 11:01:26.337776   13544 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0331 11:01:26.337796   13544 cache.go:57] Caching tarball of preloaded images
	I0331 11:01:26.338033   13544 preload.go:174] Found /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 11:01:26.338055   13544 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0331 11:01:26.339046   13544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/config.json ...
	I0331 11:01:26.339155   13544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/config.json: {Name:mk38d80c5aff069b46f21ba794e69254f8009ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:01:26.397346   13544 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 11:01:26.397364   13544 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 11:01:26.397386   13544 cache.go:193] Successfully downloaded all kic artifacts
	I0331 11:01:26.397438   13544 start.go:364] acquiring machines lock for kubernetes-upgrade-101000: {Name:mk23521e804e4230275443fd009670bc05d32947 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 11:01:26.397592   13544 start.go:368] acquired machines lock for "kubernetes-upgrade-101000" in 141.866µs
	I0331 11:01:26.397628   13544 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 11:01:26.397700   13544 start.go:125] createHost starting for "" (driver="docker")
	I0331 11:01:26.440097   13544 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0331 11:01:26.440382   13544 start.go:159] libmachine.API.Create for "kubernetes-upgrade-101000" (driver="docker")
	I0331 11:01:26.440405   13544 client.go:168] LocalClient.Create starting
	I0331 11:01:26.440513   13544 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem
	I0331 11:01:26.440558   13544 main.go:141] libmachine: Decoding PEM data...
	I0331 11:01:26.440581   13544 main.go:141] libmachine: Parsing certificate...
	I0331 11:01:26.440630   13544 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem
	I0331 11:01:26.440663   13544 main.go:141] libmachine: Decoding PEM data...
	I0331 11:01:26.440674   13544 main.go:141] libmachine: Parsing certificate...
	I0331 11:01:26.441218   13544 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-101000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0331 11:01:26.499374   13544 cli_runner.go:211] docker network inspect kubernetes-upgrade-101000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0331 11:01:26.499474   13544 network_create.go:281] running [docker network inspect kubernetes-upgrade-101000] to gather additional debugging logs...
	I0331 11:01:26.499500   13544 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-101000
	W0331 11:01:26.556442   13544 cli_runner.go:211] docker network inspect kubernetes-upgrade-101000 returned with exit code 1
	I0331 11:01:26.556467   13544 network_create.go:284] error running [docker network inspect kubernetes-upgrade-101000]: docker network inspect kubernetes-upgrade-101000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-101000
	I0331 11:01:26.556482   13544 network_create.go:286] output of [docker network inspect kubernetes-upgrade-101000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-101000
	
	** /stderr **
	I0331 11:01:26.556569   13544 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0331 11:01:26.615284   13544 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0331 11:01:26.615632   13544 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fde900}
	I0331 11:01:26.615649   13544 network_create.go:123] attempt to create docker network kubernetes-upgrade-101000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0331 11:01:26.615715   13544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-101000 kubernetes-upgrade-101000
	W0331 11:01:26.672405   13544 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-101000 kubernetes-upgrade-101000 returned with exit code 1
	W0331 11:01:26.672454   13544 network_create.go:148] failed to create docker network kubernetes-upgrade-101000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-101000 kubernetes-upgrade-101000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0331 11:01:26.672469   13544 network_create.go:115] failed to create docker network kubernetes-upgrade-101000 192.168.58.0/24, will retry: subnet is taken
	I0331 11:01:26.674010   13544 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0331 11:01:26.674325   13544 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fdf740}
	I0331 11:01:26.674342   13544 network_create.go:123] attempt to create docker network kubernetes-upgrade-101000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0331 11:01:26.674404   13544 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-101000 kubernetes-upgrade-101000
	I0331 11:01:26.764845   13544 network_create.go:107] docker network kubernetes-upgrade-101000 192.168.67.0/24 created
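The "Pool overlaps" retry above (192.168.58.0/24 taken, 192.168.67.0/24 free) can be reproduced by hand; a short sketch listing every docker network's subnet, reusing the same IPAM Go template the log itself runs:

	for n in $(docker network ls -q); do
	  docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}' "$n"
	done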
	I0331 11:01:26.764883   13544 kic.go:117] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-101000" container
	I0331 11:01:26.765001   13544 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0331 11:01:26.823708   13544 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-101000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-101000 --label created_by.minikube.sigs.k8s.io=true
	I0331 11:01:26.881529   13544 oci.go:103] Successfully created a docker volume kubernetes-upgrade-101000
	I0331 11:01:26.881640   13544 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-101000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-101000 --entrypoint /usr/bin/test -v kubernetes-upgrade-101000:/var gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -d /var/lib
	I0331 11:01:27.503611   13544 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-101000
	I0331 11:01:27.503647   13544 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:01:27.503660   13544 kic.go:190] Starting extracting preloaded images to volume ...
	I0331 11:01:27.503752   13544 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-101000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -I lz4 -xf /preloaded.tar -C /extractDir
	I0331 11:01:33.565725   13544 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-101000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -I lz4 -xf /preloaded.tar -C /extractDir: (6.062154404s)
	I0331 11:01:33.565757   13544 kic.go:199] duration metric: took 6.062398 seconds to extract preloaded images to volume
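The preload step above populates the profile's /var volume by untarring inside a throwaway container rather than on the host, so the extraction works even when the host lacks tar with lz4 support. The same pattern, generalized (volume and tarball names here are placeholders, not from the test run):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	  -v my-volume:/extractDir \
	  gcr.io/k8s-minikube/kicbase:v0.0.38 -I lz4 -xf /preloaded.tar -C /extractDir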
	I0331 11:01:33.565890   13544 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0331 11:01:33.754503   13544 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-101000 --name kubernetes-upgrade-101000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-101000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-101000 --network kubernetes-upgrade-101000 --ip 192.168.67.2 --volume kubernetes-upgrade-101000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55
	I0331 11:01:34.143277   13544 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Running}}
	I0331 11:01:34.233688   13544 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Status}}
	I0331 11:01:34.306321   13544 cli_runner.go:164] Run: docker exec kubernetes-upgrade-101000 stat /var/lib/dpkg/alternatives/iptables
	I0331 11:01:34.445290   13544 oci.go:144] the created container "kubernetes-upgrade-101000" has a running status.
	I0331 11:01:34.445358   13544 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa...
	I0331 11:01:34.564196   13544 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0331 11:01:34.745331   13544 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Status}}
	I0331 11:01:34.808989   13544 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0331 11:01:34.809010   13544 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-101000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0331 11:01:34.935975   13544 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Status}}
	I0331 11:01:35.002345   13544 machine.go:88] provisioning docker machine ...
	I0331 11:01:35.002381   13544 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-101000"
	I0331 11:01:35.002488   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:35.065474   13544 main.go:141] libmachine: Using SSH client type: native
	I0331 11:01:35.065923   13544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 51745 <nil> <nil>}
	I0331 11:01:35.065937   13544 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-101000 && echo "kubernetes-upgrade-101000" | sudo tee /etc/hostname
	I0331 11:01:35.208719   13544 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-101000
	
	I0331 11:01:35.208809   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:35.269631   13544 main.go:141] libmachine: Using SSH client type: native
	I0331 11:01:35.269977   13544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 51745 <nil> <nil>}
	I0331 11:01:35.269993   13544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 11:01:35.401577   13544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 11:01:35.403987   13544 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 11:01:35.404031   13544 ubuntu.go:177] setting up certificates
	I0331 11:01:35.404042   13544 provision.go:83] configureAuth start
	I0331 11:01:35.404129   13544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-101000
	I0331 11:01:35.465056   13544 provision.go:138] copyHostCerts
	I0331 11:01:35.465157   13544 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 11:01:35.465167   13544 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 11:01:35.465288   13544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 11:01:35.465491   13544 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 11:01:35.465497   13544 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 11:01:35.465565   13544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 11:01:35.465708   13544 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 11:01:35.465714   13544 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 11:01:35.465773   13544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 11:01:35.465892   13544 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-101000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-101000]
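The server certificate generated above should carry exactly the names from the san=[...] field; one way to double-check after a run, assuming a stock openssl on the host:

	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'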
	I0331 11:01:35.625655   13544 provision.go:172] copyRemoteCerts
	I0331 11:01:35.625719   13544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 11:01:35.625777   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:35.694392   13544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51745 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:01:35.790365   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 11:01:35.807810   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0331 11:01:35.825499   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0331 11:01:35.843477   13544 provision.go:86] duration metric: configureAuth took 439.441343ms
	I0331 11:01:35.843495   13544 ubuntu.go:193] setting minikube options for container-runtime
	I0331 11:01:35.843647   13544 config.go:182] Loaded profile config "kubernetes-upgrade-101000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0331 11:01:35.843726   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:35.912495   13544 main.go:141] libmachine: Using SSH client type: native
	I0331 11:01:35.912848   13544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 51745 <nil> <nil>}
	I0331 11:01:35.912864   13544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 11:01:36.052455   13544 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 11:01:36.052472   13544 ubuntu.go:71] root file system type: overlay
	I0331 11:01:36.052610   13544 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 11:01:36.052695   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:36.117113   13544 main.go:141] libmachine: Using SSH client type: native
	I0331 11:01:36.117479   13544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 51745 <nil> <nil>}
	I0331 11:01:36.117528   13544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 11:01:36.260271   13544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 11:01:36.260368   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:36.321240   13544 main.go:141] libmachine: Using SSH client type: native
	I0331 11:01:36.321589   13544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 51745 <nil> <nil>}
	I0331 11:01:36.321603   13544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 11:01:36.942940   13544 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-03-27 16:16:18.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:01:36.258234932 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
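	
	The SSH command at 11:01:36 is an idempotency guard: diff -u exits 0 when the staged unit matches the installed one, so the mv/daemon-reload/restart branch after || runs only when the file actually changed. The same pattern in isolation (file and service names are placeholders):
	
	  sudo diff -u /etc/myapp.conf /etc/myapp.conf.new \
	    || { sudo mv /etc/myapp.conf.new /etc/myapp.conf; \
	         sudo systemctl daemon-reload && sudo systemctl restart myapp; }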
	
	I0331 11:01:36.942964   13544 machine.go:91] provisioned docker machine in 1.940697036s
	I0331 11:01:36.942970   13544 client.go:171] LocalClient.Create took 10.503086278s
	I0331 11:01:36.942992   13544 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-101000" took 10.503133757s
	I0331 11:01:36.943003   13544 start.go:300] post-start starting for "kubernetes-upgrade-101000" (driver="docker")
	I0331 11:01:36.943009   13544 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 11:01:36.943079   13544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 11:01:36.943130   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:37.006236   13544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51745 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:01:37.104113   13544 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 11:01:37.107859   13544 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 11:01:37.107879   13544 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 11:01:37.107886   13544 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 11:01:37.107895   13544 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 11:01:37.107903   13544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 11:01:37.107988   13544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 11:01:37.108150   13544 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 11:01:37.108308   13544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 11:01:37.116897   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:01:37.140203   13544 start.go:303] post-start completed in 197.19898ms
	I0331 11:01:37.140724   13544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-101000
	I0331 11:01:37.206594   13544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/config.json ...
	I0331 11:01:37.207031   13544 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 11:01:37.207089   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:37.273423   13544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51745 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:01:37.368062   13544 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 11:01:37.373123   13544 start.go:128] duration metric: createHost completed in 10.975963446s
	I0331 11:01:37.373141   13544 start.go:83] releasing machines lock for "kubernetes-upgrade-101000", held for 10.976090155s
	I0331 11:01:37.373238   13544 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-101000
	I0331 11:01:37.437319   13544 ssh_runner.go:195] Run: cat /version.json
	I0331 11:01:37.437326   13544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 11:01:37.437411   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:37.437420   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:37.508886   13544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51745 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:01:37.508965   13544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51745 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	W0331 11:01:37.653813   13544 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 11:01:37.653892   13544 ssh_runner.go:195] Run: systemctl --version
	I0331 11:01:37.659678   13544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0331 11:01:37.665199   13544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0331 11:01:37.687812   13544 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0331 11:01:37.687915   13544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0331 11:01:37.703003   13544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0331 11:01:37.711008   13544 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
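	
	The three find/sed passes above patch whatever CNI configs already exist in /etc/cni/net.d: the loopback config gains a "name" field and cniVersion 1.0.0, and any bridge or podman config is rewritten onto the 10.244.0.0/16 pod subnet. A sketch of inspecting the patched bridge config (the JSON shown is illustrative of the shape being targeted, not the file's verbatim contents):
	
	  sudo cat /etc/cni/net.d/100-crio-bridge.conf
	  {
	    "cniVersion": "1.0.0", "name": "crio-bridge", "type": "bridge",
	    "ipam": {"type": "host-local", "ranges": [[{"subnet": "10.244.0.0/16"}]]}
	  }
	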
	I0331 11:01:37.711026   13544 start.go:481] detecting cgroup driver to use...
	I0331 11:01:37.711037   13544 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:01:37.711108   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:01:37.724877   13544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0331 11:01:37.734189   13544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 11:01:37.743190   13544 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 11:01:37.743255   13544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 11:01:37.752331   13544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:01:37.762392   13544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 11:01:37.774719   13544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:01:37.784954   13544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 11:01:37.793273   13544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
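	
	The sed pipeline above pins containerd to the cgroupfs driver and the runc v2 shim. The config.toml stanza those edits land in looks roughly like this (a sketch of the standard CRI plugin layout, not the container's verbatim file):
	
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	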
	I0331 11:01:37.802212   13544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 11:01:37.809753   13544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 11:01:37.817603   13544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:01:37.888961   13544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 11:01:37.962637   13544 start.go:481] detecting cgroup driver to use...
	I0331 11:01:37.962657   13544 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:01:37.962717   13544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 11:01:37.974327   13544 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 11:01:37.974423   13544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 11:01:37.986631   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:01:38.002544   13544 ssh_runner.go:195] Run: which cri-dockerd
	I0331 11:01:38.007228   13544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 11:01:38.022011   13544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 11:01:38.039328   13544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 11:01:38.141466   13544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 11:01:38.217614   13544 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 11:01:38.217630   13544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 11:01:38.244587   13544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:01:38.320760   13544 ssh_runner.go:195] Run: sudo systemctl restart docker
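	
	The 144-byte daemon.json scp'd above is what carries the cgroup-driver choice to dockerd. Its likely shape, sketched from the native.cgroupdriver exec-opt that dockerd accepts (the log records only the byte count, not the contents):
	
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  }
	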
	I0331 11:01:38.667140   13544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:01:38.693992   13544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:01:38.742954   13544 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.2 ...
	I0331 11:01:38.743085   13544 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-101000 dig +short host.docker.internal
	I0331 11:01:38.863758   13544 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 11:01:38.863903   13544 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0331 11:01:38.870420   13544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:01:38.882471   13544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:01:38.944312   13544 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:01:38.944409   13544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:01:38.966105   13544 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:01:38.966122   13544 docker.go:645] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0331 11:01:38.966189   13544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 11:01:38.974347   13544 ssh_runner.go:195] Run: which lz4
	I0331 11:01:38.978483   13544 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0331 11:01:38.982478   13544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0331 11:01:38.982509   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0331 11:01:44.292350   13544 docker.go:603] Took 5.314201 seconds to copy over tarball
	I0331 11:01:44.292419   13544 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0331 11:01:46.460761   13544 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.168428392s)
	I0331 11:01:46.460777   13544 ssh_runner.go:146] rm: /preloaded.tar.lz4
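	
	The preload is a plain lz4-compressed tar of /var/lib/docker: scp it in, extract with tar's -I filter program, remove it. The round trip as a sketch (the extract line is the log's own command; building an archive this way only approximates minikube's generator tooling):
	
	  sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4             # extract, as above
	  sudo tar -I lz4 -C /var -cf preloaded.tar.lz4 lib/docker   # rebuild an equivalent
	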
	I0331 11:01:46.529705   13544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 11:01:46.537841   13544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0331 11:01:46.550733   13544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:01:46.615212   13544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:01:47.474225   13544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:01:47.495171   13544 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:01:47.495185   13544 docker.go:645] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0331 11:01:47.495195   13544 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0331 11:01:47.504563   13544 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:01:47.505475   13544 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:01:47.507226   13544 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:01:47.507938   13544 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0331 11:01:47.508733   13544 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0331 11:01:47.509570   13544 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:01:47.510047   13544 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:01:47.512092   13544 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:01:47.518585   13544 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:01:47.518776   13544 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error: No such image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:01:47.521950   13544 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error: No such image: registry.k8s.io/pause:3.1
	I0331 11:01:47.523316   13544 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error: No such image: registry.k8s.io/coredns:1.6.2
	I0331 11:01:47.523458   13544 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:01:47.525813   13544 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:01:47.526483   13544 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:01:47.526679   13544 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:01:48.638275   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0331 11:01:48.661024   13544 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0331 11:01:48.661084   13544 docker.go:313] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:01:48.661220   13544 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0331 11:01:48.686147   13544 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0331 11:01:48.818278   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:01:48.845467   13544 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0331 11:01:48.845496   13544 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:01:48.845560   13544 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:01:48.874144   13544 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0331 11:01:48.994414   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0331 11:01:49.019110   13544 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0331 11:01:49.019139   13544 docker.go:313] Removing image: registry.k8s.io/pause:3.1
	I0331 11:01:49.019198   13544 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0331 11:01:49.043132   13544 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0331 11:01:49.104906   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0331 11:01:49.129295   13544 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0331 11:01:49.129332   13544 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.2
	I0331 11:01:49.129410   13544 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0331 11:01:49.155153   13544 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0331 11:01:49.400736   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:01:49.424667   13544 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0331 11:01:49.424694   13544 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:01:49.424756   13544 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:01:49.448861   13544 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0331 11:01:49.717903   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:01:49.739033   13544 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0331 11:01:49.739062   13544 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:01:49.739122   13544 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:01:49.761597   13544 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0331 11:01:50.379448   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:01:50.422603   13544 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0331 11:01:50.422627   13544 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:01:50.422699   13544 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:01:50.442772   13544 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0331 11:01:50.541964   13544 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:01:50.565244   13544 cache_images.go:92] LoadImages completed in 3.070190364s
	W0331 11:01:50.565349   13544 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
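	
	The root cause is visible earlier in the log: the preload ships k8s.gcr.io-tagged images, minikube wants registry.k8s.io names, and the per-image fallback files under .minikube/cache/images/amd64/ were never written. A sketch for inspecting and repopulating the host-side cache (cache add is a long-standing minikube subcommand; image load is its newer equivalent):
	
	  ls ~/.minikube/cache/images/amd64/registry.k8s.io/
	  out/minikube-darwin-amd64 cache add registry.k8s.io/etcd:3.3.15-0
	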
	I0331 11:01:50.565443   13544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 11:01:50.617351   13544 cni.go:84] Creating CNI manager for ""
	I0331 11:01:50.617366   13544 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 11:01:50.617382   13544 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 11:01:50.617398   13544 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-101000 NodeName:kubernetes-upgrade-101000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 11:01:50.617501   13544 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-101000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-101000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
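	
	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube stages as kubeadm.yaml.new and hands to kubeadm init later in this run. A config like this can be vetted without touching the node, since kubeadm init accepts --dry-run (sketch; path as staged by minikube below):
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run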
	
	I0331 11:01:50.617559   13544 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0331 11:01:50.617603   13544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0331 11:01:50.626193   13544 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 11:01:50.626252   13544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 11:01:50.634754   13544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0331 11:01:50.648738   13544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 11:01:50.662761   13544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0331 11:01:50.677641   13544 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0331 11:01:50.682651   13544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:01:50.693586   13544 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000 for IP: 192.168.67.2
	I0331 11:01:50.693608   13544 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:01:50.693785   13544 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 11:01:50.693843   13544 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 11:01:50.693885   13544 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key
	I0331 11:01:50.693897   13544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.crt with IP's: []
	I0331 11:01:50.845041   13544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.crt ...
	I0331 11:01:50.845052   13544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.crt: {Name:mkc106adc4737527ff84723e033ed7dfc1aad4f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:01:50.848856   13544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key ...
	I0331 11:01:50.848867   13544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key: {Name:mka04a946b2e94af31ce3dbfbca56ab978ed3f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:01:50.849076   13544 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key.c7fa3a9e
	I0331 11:01:50.849097   13544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0331 11:01:51.055336   13544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt.c7fa3a9e ...
	I0331 11:01:51.055357   13544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt.c7fa3a9e: {Name:mkf2b96d78c4ce0ae15df8d077404485444a552b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:01:51.055650   13544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key.c7fa3a9e ...
	I0331 11:01:51.055659   13544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key.c7fa3a9e: {Name:mke551905b67cd63e45641c65c033740509a4bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:01:51.055834   13544 certs.go:333] copying /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt
	I0331 11:01:51.055984   13544 certs.go:337] copying /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key
	I0331 11:01:51.056126   13544 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.key
	I0331 11:01:51.056141   13544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.crt with IP's: []
	I0331 11:01:51.336730   13544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.crt ...
	I0331 11:01:51.336742   13544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.crt: {Name:mk67eaed0dca4f0c84147b6749fc68384ff0e204 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:01:51.337141   13544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.key ...
	I0331 11:01:51.337150   13544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.key: {Name:mk4603ad56a67c573f3d7566bb006582a14b4208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
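	
	Three leaf pairs are minted here: the profile's client cert, the apiserver serving cert (its SANs cover 192.168.67.2, 10.96.0.1, 127.0.0.1, and 10.0.0.1, per the crypto.go line above), and the aggregator proxy-client cert. Checking the SAN list on the result takes stock openssl (sketch; the profile path is abbreviated with ~):
	
	  openssl x509 -noout -text \
	    -in ~/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	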
	I0331 11:01:51.358701   13544 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 11:01:51.358754   13544 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 11:01:51.358773   13544 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 11:01:51.358814   13544 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 11:01:51.358874   13544 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 11:01:51.358915   13544 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 11:01:51.359000   13544 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:01:51.359494   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 11:01:51.379173   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 11:01:51.396957   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 11:01:51.414756   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 11:01:51.432473   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 11:01:51.450172   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 11:01:51.467748   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 11:01:51.486626   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 11:01:51.504748   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 11:01:51.522683   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 11:01:51.540587   13544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 11:01:51.558703   13544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 11:01:51.572413   13544 ssh_runner.go:195] Run: openssl version
	I0331 11:01:51.579059   13544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 11:01:51.588189   13544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 11:01:51.592374   13544 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 11:01:51.592436   13544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 11:01:51.598223   13544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
	I0331 11:01:51.606686   13544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 11:01:51.615338   13544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:01:51.619535   13544 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:01:51.619587   13544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:01:51.625263   13544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 11:01:51.633721   13544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 11:01:51.642165   13544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 11:01:51.646468   13544 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 11:01:51.646529   13544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 11:01:51.652532   13544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
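	
	The ls/openssl/ln triplets above are a manual c_rehash: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filenames of the form HASH.0, so each installed PEM gets a symlink named after its hash. The pattern in isolation (sketch; the cert path is a placeholder):
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	  sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${h}.0"
	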
	I0331 11:01:51.661020   13544 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:01:51.661127   13544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:01:51.681774   13544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 11:01:51.689979   13544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:01:51.697947   13544 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:01:51.698010   13544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:01:51.705931   13544 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 11:01:51.705960   13544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:01:51.754074   13544 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:01:51.754134   13544 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:01:51.932410   13544 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:01:51.932508   13544 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:01:51.932599   13544 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0331 11:01:52.092581   13544 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:01:52.094236   13544 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:01:52.100848   13544 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:01:52.168395   13544 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:01:52.190010   13544 out.go:204]   - Generating certificates and keys ...
	I0331 11:01:52.190095   13544 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:01:52.190151   13544 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:01:52.291363   13544 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0331 11:01:52.584082   13544 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0331 11:01:52.827375   13544 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0331 11:01:53.021861   13544 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0331 11:01:53.193587   13544 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0331 11:01:53.193727   13544 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-101000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0331 11:01:53.251971   13544 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0331 11:01:53.252088   13544 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-101000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0331 11:01:53.302742   13544 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0331 11:01:53.421332   13544 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0331 11:01:53.523671   13544 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0331 11:01:53.523728   13544 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:01:53.636656   13544 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:01:53.781409   13544 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:01:53.904509   13544 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:01:54.019665   13544 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:01:54.020228   13544 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:01:54.041607   13544 out.go:204]   - Booting up control plane ...
	I0331 11:01:54.041692   13544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:01:54.041769   13544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:01:54.041830   13544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:01:54.041895   13544 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:01:54.042026   13544 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:02:34.028647   13544 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:02:34.029350   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:02:34.029583   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:02:39.029753   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:02:39.029925   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:02:49.029945   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:02:49.030116   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:03:09.029872   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:03:09.030057   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:03:49.028622   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:03:49.028806   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:03:49.029070   13544 kubeadm.go:322] 
	I0331 11:03:49.029131   13544 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:03:49.029173   13544 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:03:49.029182   13544 kubeadm.go:322] 
	I0331 11:03:49.029212   13544 kubeadm.go:322] This error is likely caused by:
	I0331 11:03:49.029235   13544 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:03:49.029347   13544 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:03:49.029357   13544 kubeadm.go:322] 
	I0331 11:03:49.029431   13544 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:03:49.029457   13544 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:03:49.029481   13544 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:03:49.029489   13544 kubeadm.go:322] 
	I0331 11:03:49.029580   13544 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:03:49.029659   13544 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0331 11:03:49.029725   13544 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0331 11:03:49.029764   13544 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:03:49.029817   13544 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:03:49.029846   13544 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:03:49.032752   13544 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:03:49.032823   13544 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:03:49.032930   13544 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:03:49.033017   13544 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:03:49.033103   13544 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:03:49.033166   13544 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
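	
	Every probe in the escalating 40s-to-2m waits above is refused outright, i.e. nothing is listening on the kubelet's healthz port 10248 at all, which points at the kubelet never starting rather than starting unhealthy. The checks kubeadm suggests, plus the probe itself, collected as one sketch to run on the node:
	
	  systemctl status kubelet
	  journalctl -xeu kubelet --no-pager | tail -n 50
	  curl -sSL http://localhost:10248/healthz
	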
	W0331 11:03:49.033302   13544 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-101000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-101000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0331 11:03:49.033334   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0331 11:03:49.456602   13544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:03:49.467084   13544 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:03:49.467142   13544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:03:49.475036   13544 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 11:03:49.475064   13544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:03:49.522931   13544 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:03:49.522988   13544 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:03:49.690729   13544 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:03:49.690825   13544 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:03:49.690909   13544 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:03:49.847501   13544 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:03:49.848460   13544 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:03:49.855326   13544 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:03:49.926992   13544 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:03:49.950336   13544 out.go:204]   - Generating certificates and keys ...
	I0331 11:03:49.950425   13544 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:03:49.950497   13544 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:03:49.950579   13544 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 11:03:49.950642   13544 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 11:03:49.950698   13544 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 11:03:49.950762   13544 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 11:03:49.950814   13544 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 11:03:49.950882   13544 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 11:03:49.951000   13544 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 11:03:49.951065   13544 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 11:03:49.951102   13544 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 11:03:49.951169   13544 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:03:50.157064   13544 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:03:50.284265   13544 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:03:50.557632   13544 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:03:50.650538   13544 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:03:50.651130   13544 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:03:50.672798   13544 out.go:204]   - Booting up control plane ...
	I0331 11:03:50.672925   13544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:03:50.673017   13544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:03:50.673104   13544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:03:50.673196   13544 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:03:50.673442   13544 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:04:30.658177   13544 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:04:30.659288   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:04:30.659521   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:04:35.659945   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:04:35.660119   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:04:45.660385   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:04:45.660542   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:05:05.660509   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:05:05.660715   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:05:45.659791   13544 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:05:45.659976   13544 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:05:45.659986   13544 kubeadm.go:322] 
	I0331 11:05:45.660028   13544 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:05:45.660069   13544 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:05:45.660074   13544 kubeadm.go:322] 
	I0331 11:05:45.660100   13544 kubeadm.go:322] This error is likely caused by:
	I0331 11:05:45.660143   13544 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:05:45.660256   13544 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:05:45.660269   13544 kubeadm.go:322] 
	I0331 11:05:45.660365   13544 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:05:45.660402   13544 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:05:45.660436   13544 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:05:45.660442   13544 kubeadm.go:322] 
	I0331 11:05:45.660534   13544 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:05:45.660624   13544 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0331 11:05:45.660718   13544 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0331 11:05:45.660759   13544 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:05:45.660818   13544 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:05:45.660845   13544 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:05:45.663054   13544 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:05:45.663126   13544 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:05:45.663229   13544 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:05:45.663320   13544 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:05:45.663416   13544 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:05:45.663489   13544 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0331 11:05:45.663530   13544 kubeadm.go:403] StartCluster complete in 3m54.01421829s
	I0331 11:05:45.663636   13544 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:05:45.683999   13544 logs.go:277] 0 containers: []
	W0331 11:05:45.684012   13544 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:05:45.684081   13544 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:05:45.709101   13544 logs.go:277] 0 containers: []
	W0331 11:05:45.709132   13544 logs.go:279] No container was found matching "etcd"
	I0331 11:05:45.709229   13544 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:05:45.739283   13544 logs.go:277] 0 containers: []
	W0331 11:05:45.739296   13544 logs.go:279] No container was found matching "coredns"
	I0331 11:05:45.739367   13544 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:05:45.763867   13544 logs.go:277] 0 containers: []
	W0331 11:05:45.763882   13544 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:05:45.763957   13544 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:05:45.794141   13544 logs.go:277] 0 containers: []
	W0331 11:05:45.794155   13544 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:05:45.794232   13544 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:05:45.817223   13544 logs.go:277] 0 containers: []
	W0331 11:05:45.817238   13544 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:05:45.817314   13544 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:05:45.844726   13544 logs.go:277] 0 containers: []
	W0331 11:05:45.844744   13544 logs.go:279] No container was found matching "kindnet"
	I0331 11:05:45.844753   13544 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:05:45.844782   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:05:45.921968   13544 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:05:45.921984   13544 logs.go:123] Gathering logs for Docker ...
	I0331 11:05:45.921994   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:05:45.960268   13544 logs.go:123] Gathering logs for container status ...
	I0331 11:05:45.960299   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:05:48.018200   13544 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057974123s)
	I0331 11:05:48.018335   13544 logs.go:123] Gathering logs for kubelet ...
	I0331 11:05:48.018344   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:05:48.064741   13544 logs.go:123] Gathering logs for dmesg ...
	I0331 11:05:48.064765   13544 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0331 11:05:48.083011   13544 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0331 11:05:48.083033   13544 out.go:239] * 
	W0331 11:05:48.083136   13544 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 11:05:48.083151   13544 out.go:239] * 
	W0331 11:05:48.083764   13544 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0331 11:05:48.145362   13544 out.go:177] 
	W0331 11:05:48.187505   13544 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 11:05:48.187582   13544 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0331 11:05:48.187611   13544 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0331 11:05:48.208378   13544 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
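For anyone re-running this failure locally, the fix suggested in the log above (`--extra-config=kubelet.cgroup-driver=systemd`) maps onto a recreate-and-retry along these lines. This is a sketch only: the binary path, profile name, and flags are taken from this run's own output, and the suggested flag may not cure the kubelet startup failure on every host.

	# recreate the profile, then retry the oldest-k8s start with the suggested kubelet flag
	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-101000
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 \
	  --kubernetes-version=v1.16.0 --driver=docker --alsologtostderr -v=1 \
	  --extra-config=kubelet.cgroup-driver=systemd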
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-101000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-101000: (1.743814125s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-101000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-101000 status --format={{.Host}}: exit status 7 (126.327393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker 
E0331 11:06:19.593684    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker : (1m41.792138219s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-101000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (459.869339ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-101000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-101000
	    minikube start -p kubernetes-upgrade-101000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1010002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-101000 --kubernetes-version=v1.27.0-rc.0
	    

                                                
                                                
** /stderr **
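
As the exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above shows, minikube refuses to downgrade a live cluster in place, since control-plane state written by a newer Kubernetes version is not guaranteed to roll back safely. A sketch of the first recovery option, using the commands quoted verbatim in the suggestion, followed by the same version check the test performs:

    # throw away the v1.27.0-rc.0 cluster and recreate it at the older version
    minikube delete -p kubernetes-upgrade-101000
    minikube start -p kubernetes-upgrade-101000 --kubernetes-version=v1.16.0
    # confirm the server version for this context
    kubectl --context kubernetes-upgrade-101000 version --output=json
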
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker 
E0331 11:07:33.587456    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-101000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker : (19.053434816s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-03-31 11:07:51.544281 -0700 PDT m=+2876.497103778
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-101000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-101000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "54c2b3052dd3c8c7cfc6f0f41a402ecc9cfff1e0379709015ca612ce9fb71d58",
	        "Created": "2023-03-31T18:01:33.82409013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205667,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:05:52.016295056Z",
	            "FinishedAt": "2023-03-31T18:05:48.90071693Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/54c2b3052dd3c8c7cfc6f0f41a402ecc9cfff1e0379709015ca612ce9fb71d58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/54c2b3052dd3c8c7cfc6f0f41a402ecc9cfff1e0379709015ca612ce9fb71d58/hostname",
	        "HostsPath": "/var/lib/docker/containers/54c2b3052dd3c8c7cfc6f0f41a402ecc9cfff1e0379709015ca612ce9fb71d58/hosts",
	        "LogPath": "/var/lib/docker/containers/54c2b3052dd3c8c7cfc6f0f41a402ecc9cfff1e0379709015ca612ce9fb71d58/54c2b3052dd3c8c7cfc6f0f41a402ecc9cfff1e0379709015ca612ce9fb71d58-json.log",
	        "Name": "/kubernetes-upgrade-101000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-101000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-101000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/968a91dc84162a1f3d37040f382c2a8f3eb7ec74b08580a01aa5841c4c1f163f-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/968a91dc84162a1f3d37040f382c2a8f3eb7ec74b08580a01aa5841c4c1f163f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/968a91dc84162a1f3d37040f382c2a8f3eb7ec74b08580a01aa5841c4c1f163f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/968a91dc84162a1f3d37040f382c2a8f3eb7ec74b08580a01aa5841c4c1f163f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-101000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-101000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-101000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-101000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-101000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dceb0169c93c45b1dc0bf923b70a1967d54372e51273fca812741dd65ac42bdc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52012"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52013"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52014"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52015"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52016"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dceb0169c93c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-101000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "54c2b3052dd3",
	                        "kubernetes-upgrade-101000"
	                    ],
	                    "NetworkID": "c8b4ab17eb0e9d71a8b71e72d5cfb6e5fd9634a1431a2f9c1d3150ed8831beda",
	                    "EndpointID": "fcf0408ec09fce5cf709b91fa726bb2b9e153b6e7421c8dec4d3a2dbf1994563",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
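
The full `docker inspect` dump above is verbose; for a targeted post-mortem the same data can be pulled field by field with Go templates, exactly as the cli_runner calls later in these logs do (the profile name is taken from this run):

    # container state only
    docker container inspect -f '{{.State.Status}}' kubernetes-upgrade-101000
    # mapped host port for SSH (22/tcp), mirroring the cli_runner invocations below
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kubernetes-upgrade-101000
    # container IP on the profile network
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-101000
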
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-101000 -n kubernetes-upgrade-101000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-101000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-101000 logs -n 25: (2.495176625s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-346000 sudo journalctl                       | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | -xeu kubelet --all --full                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo cat                              | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo cat                              | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo cat                              | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo docker                           | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo cat                              | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo cat                              | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo                                  | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo cat                              | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:06 PDT | 31 Mar 23 11:06 PDT |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo cat                              | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT | 31 Mar 23 11:07 PDT |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo containerd                       | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT | 31 Mar 23 11:07 PDT |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT |                     |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo systemctl                        | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT | 31 Mar 23 11:07 PDT |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo find                             | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT | 31 Mar 23 11:07 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-346000 sudo crio                             | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT | 31 Mar 23 11:07 PDT |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-346000                                       | auto-346000               | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT | 31 Mar 23 11:07 PDT |
	| start   | -p kindnet-346000                                    | kindnet-346000            | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker                        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-101000                         | kubernetes-upgrade-101000 | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-101000                         | kubernetes-upgrade-101000 | jenkins | v1.29.0 | 31 Mar 23 11:07 PDT | 31 Mar 23 11:07 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0                    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 11:07:32
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 11:07:32.531623   15937 out.go:296] Setting OutFile to fd 1 ...
	I0331 11:07:32.531803   15937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:07:32.531808   15937 out.go:309] Setting ErrFile to fd 2...
	I0331 11:07:32.531812   15937 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:07:32.531926   15937 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 11:07:32.533284   15937 out.go:303] Setting JSON to false
	I0331 11:07:32.553443   15937 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4020,"bootTime":1680282032,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 11:07:32.553526   15937 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 11:07:32.574712   15937 out.go:177] * [kubernetes-upgrade-101000] minikube v1.29.0 on Darwin 13.3
	I0331 11:07:32.611774   15937 notify.go:220] Checking for updates...
	I0331 11:07:32.649647   15937 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 11:07:32.670956   15937 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:07:32.692936   15937 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 11:07:32.714540   15937 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 11:07:32.735663   15937 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 11:07:32.756710   15937 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 11:07:32.778135   15937 config.go:182] Loaded profile config "kubernetes-upgrade-101000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0331 11:07:32.778782   15937 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 11:07:32.843961   15937 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 11:07:32.844082   15937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:07:33.034731   15937 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:60 SystemTime:2023-03-31 18:07:32.896312956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:07:33.111309   15937 out.go:177] * Using the docker driver based on existing profile
	I0331 11:07:33.134387   15937 start.go:295] selected driver: docker
	I0331 11:07:33.134414   15937 start.go:859] validating driver "docker" against &{Name:kubernetes-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:07:33.134571   15937 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 11:07:33.139780   15937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:07:33.327594   15937 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:false NGoroutines:60 SystemTime:2023-03-31 18:07:33.193168206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:07:33.327760   15937 cni.go:84] Creating CNI manager for ""
	I0331 11:07:33.327777   15937 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:07:33.327793   15937 start_flags.go:319] config:
	{Name:kubernetes-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:07:33.350291   15937 out.go:177] * Starting control plane node kubernetes-upgrade-101000 in cluster kubernetes-upgrade-101000
	I0331 11:07:33.376470   15937 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 11:07:33.397100   15937 out.go:177] * Pulling base image ...
	I0331 11:07:33.439287   15937 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 11:07:33.439340   15937 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 11:07:33.439372   15937 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0331 11:07:33.439385   15937 cache.go:57] Caching tarball of preloaded images
	I0331 11:07:33.439542   15937 preload.go:174] Found /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 11:07:33.439558   15937 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.0-rc.0 on docker
	I0331 11:07:33.440189   15937 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/config.json ...
	I0331 11:07:33.499746   15937 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 11:07:33.499763   15937 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 11:07:33.499789   15937 cache.go:193] Successfully downloaded all kic artifacts
	I0331 11:07:33.499823   15937 start.go:364] acquiring machines lock for kubernetes-upgrade-101000: {Name:mk23521e804e4230275443fd009670bc05d32947 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 11:07:33.499909   15937 start.go:368] acquired machines lock for "kubernetes-upgrade-101000" in 69.223µs
	I0331 11:07:33.499934   15937 start.go:96] Skipping create...Using existing machine configuration
	I0331 11:07:33.499941   15937 fix.go:55] fixHost starting: 
	I0331 11:07:33.500216   15937 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Status}}
	I0331 11:07:33.560747   15937 fix.go:103] recreateIfNeeded on kubernetes-upgrade-101000: state=Running err=<nil>
	W0331 11:07:33.560802   15937 fix.go:129] unexpected machine state, will restart: <nil>
	I0331 11:07:33.582492   15937 out.go:177] * Updating the running docker "kubernetes-upgrade-101000" container ...
	I0331 11:07:30.459464   15794 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0331 11:07:30.485680   15794 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.3/kubectl ...
	I0331 11:07:30.485698   15794 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0331 11:07:30.502126   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0331 11:07:31.159381   15794 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0331 11:07:31.159499   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:31.159530   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=945b3fc45ee9ac8e1ceaffb00a71ec22c717b10e minikube.k8s.io/name=kindnet-346000 minikube.k8s.io/updated_at=2023_03_31T11_07_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:31.293769   15794 ops.go:34] apiserver oom_adj: -16
	I0331 11:07:31.293847   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:31.875422   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:32.376038   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:32.876300   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:33.375342   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:33.875184   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:34.375171   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:34.875200   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:35.375149   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:33.603141   15937 machine.go:88] provisioning docker machine ...
	I0331 11:07:33.603196   15937 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-101000"
	I0331 11:07:33.603393   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:33.665476   15937 main.go:141] libmachine: Using SSH client type: native
	I0331 11:07:33.665874   15937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 52012 <nil> <nil>}
	I0331 11:07:33.665887   15937 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-101000 && echo "kubernetes-upgrade-101000" | sudo tee /etc/hostname
	I0331 11:07:33.810812   15937 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-101000
	
	I0331 11:07:33.810903   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:33.872432   15937 main.go:141] libmachine: Using SSH client type: native
	I0331 11:07:33.872780   15937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 52012 <nil> <nil>}
	I0331 11:07:33.872796   15937 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 11:07:34.005726   15937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 11:07:34.005753   15937 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 11:07:34.005776   15937 ubuntu.go:177] setting up certificates
	I0331 11:07:34.005788   15937 provision.go:83] configureAuth start
	I0331 11:07:34.005862   15937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-101000
	I0331 11:07:34.067427   15937 provision.go:138] copyHostCerts
	I0331 11:07:34.067530   15937 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 11:07:34.067547   15937 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 11:07:34.067653   15937 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 11:07:34.067854   15937 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 11:07:34.067862   15937 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 11:07:34.067922   15937 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 11:07:34.068063   15937 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 11:07:34.068069   15937 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 11:07:34.068133   15937 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 11:07:34.068255   15937 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-101000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-101000]
	I0331 11:07:34.187126   15937 provision.go:172] copyRemoteCerts
	I0331 11:07:34.187186   15937 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 11:07:34.187234   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:34.248066   15937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52012 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:07:34.340137   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 11:07:34.358171   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0331 11:07:34.375984   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0331 11:07:34.393953   15937 provision.go:86] duration metric: configureAuth took 388.171856ms
	I0331 11:07:34.393970   15937 ubuntu.go:193] setting minikube options for container-runtime
	I0331 11:07:34.394127   15937 config.go:182] Loaded profile config "kubernetes-upgrade-101000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0331 11:07:34.394208   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:34.456614   15937 main.go:141] libmachine: Using SSH client type: native
	I0331 11:07:34.456967   15937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 52012 <nil> <nil>}
	I0331 11:07:34.456976   15937 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 11:07:34.592856   15937 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 11:07:34.592872   15937 ubuntu.go:71] root file system type: overlay
	I0331 11:07:34.592963   15937 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 11:07:34.593049   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:34.653361   15937 main.go:141] libmachine: Using SSH client type: native
	I0331 11:07:34.653726   15937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 52012 <nil> <nil>}
	I0331 11:07:34.653776   15937 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 11:07:34.796967   15937 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 11:07:34.797091   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:34.858883   15937 main.go:141] libmachine: Using SSH client type: native
	I0331 11:07:34.859242   15937 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 52012 <nil> <nil>}
	I0331 11:07:34.859257   15937 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 11:07:34.996691   15937 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 11:07:34.996709   15937 machine.go:91] provisioned docker machine in 1.393619801s
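The unit update just above follows a replace-only-if-changed idiom: diff the live unit against the freshly rendered docker.service.new, and only when they differ move the new file into place and reload/restart. The same pattern, written out as a sketch:

    unit=/lib/systemd/system/docker.service
    if ! sudo diff -u "$unit" "$unit.new" >/dev/null 2>&1; then
      sudo mv "$unit.new" "$unit"                   # adopt the rendered unit
      sudo systemctl -f daemon-reload               # pick up the change
      sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi

When the files already match, docker keeps running untouched, which is consistent with the ~1.4s provisioning time logged above.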
	I0331 11:07:34.996721   15937 start.go:300] post-start starting for "kubernetes-upgrade-101000" (driver="docker")
	I0331 11:07:34.996726   15937 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 11:07:34.996823   15937 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 11:07:34.996872   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:35.057434   15937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52012 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:07:35.153663   15937 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 11:07:35.157572   15937 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 11:07:35.157587   15937 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 11:07:35.157599   15937 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 11:07:35.157604   15937 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 11:07:35.157611   15937 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 11:07:35.157693   15937 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 11:07:35.157858   15937 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 11:07:35.158041   15937 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 11:07:35.165657   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:07:35.183164   15937 start.go:303] post-start completed in 186.443779ms
	I0331 11:07:35.183239   15937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 11:07:35.183298   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:35.245004   15937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52012 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:07:35.335569   15937 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 11:07:35.340565   15937 fix.go:57] fixHost completed within 1.840711718s
	I0331 11:07:35.340584   15937 start.go:83] releasing machines lock for "kubernetes-upgrade-101000", held for 1.840759012s
	I0331 11:07:35.340678   15937 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-101000
	I0331 11:07:35.402460   15937 ssh_runner.go:195] Run: cat /version.json
	I0331 11:07:35.402542   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:35.402555   15937 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 11:07:35.402675   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:35.476648   15937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52012 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:07:35.479363   15937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52012 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	W0331 11:07:35.568295   15937 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 11:07:35.568381   15937 ssh_runner.go:195] Run: systemctl --version
	I0331 11:07:35.620904   15937 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0331 11:07:35.626143   15937 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0331 11:07:35.626214   15937 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0331 11:07:35.634884   15937 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0331 11:07:35.642634   15937 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0331 11:07:35.642651   15937 start.go:481] detecting cgroup driver to use...
	I0331 11:07:35.642661   15937 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:07:35.642741   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:07:35.656054   15937 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0331 11:07:35.664961   15937 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 11:07:35.673599   15937 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 11:07:35.673662   15937 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 11:07:35.682146   15937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:07:35.690872   15937 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 11:07:35.699652   15937 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:07:35.708370   15937 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 11:07:35.716317   15937 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 11:07:35.725094   15937 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 11:07:35.732297   15937 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 11:07:35.739691   15937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:07:35.815203   15937 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 11:07:37.226421   15937 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (1.411271376s)
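After these sed edits pin containerd to the cgroupfs driver (SystemdCgroup = false) and the restart completes, the result can be spot-checked on the node; a sketch:

    grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
    sudo systemctl is-active containerd                   # expect: active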
	I0331 11:07:37.226438   15937 start.go:481] detecting cgroup driver to use...
	I0331 11:07:37.226452   15937 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:07:37.226522   15937 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 11:07:37.247478   15937 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 11:07:37.247557   15937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 11:07:37.260184   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:07:37.277956   15937 ssh_runner.go:195] Run: which cri-dockerd
	I0331 11:07:37.282371   15937 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 11:07:37.290943   15937 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 11:07:37.317866   15937 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 11:07:37.442152   15937 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 11:07:37.603770   15937 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 11:07:37.603792   15937 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 11:07:37.618298   15937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:07:37.707546   15937 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:07:38.388961   15937 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:07:38.455557   15937 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0331 11:07:38.516771   15937 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:07:38.589832   15937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:07:38.673487   15937 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0331 11:07:38.690189   15937 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:07:38.758600   15937 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0331 11:07:38.878222   15937 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0331 11:07:38.878329   15937 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0331 11:07:38.882939   15937 start.go:549] Will wait 60s for crictl version
	I0331 11:07:38.883016   15937 ssh_runner.go:195] Run: which crictl
	I0331 11:07:38.887382   15937 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0331 11:07:38.921348   15937 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.2
	RuntimeApiVersion:  v1alpha2
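The crictl probe above resolves its endpoint from /etc/crictl.yaml, written a few lines earlier. The equivalent explicit invocation against the cri-dockerd socket, as a sketch:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version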
	I0331 11:07:38.921429   15937 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:07:38.947427   15937 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:07:35.875158   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:36.376348   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:36.875168   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:37.375100   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:37.875005   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:38.375002   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:38.874972   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:39.376287   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:39.874887   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:40.375982   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:40.874913   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:41.374838   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:41.874800   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:42.374838   15794 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:07:42.444071   15794 kubeadm.go:1073] duration metric: took 11.285213883s to wait for elevateKubeSystemPrivileges.
	I0331 11:07:42.444091   15794 kubeadm.go:403] StartCluster complete in 21.265556911s
	I0331 11:07:42.444108   15794 settings.go:142] acquiring lock: {Name:mk3cb9e1bd7c44f22a996c12a2b2b34c5bbc4ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:07:42.444190   15794 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:07:42.444932   15794 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:07:42.445177   15794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0331 11:07:42.445219   15794 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0331 11:07:42.445304   15794 addons.go:66] Setting storage-provisioner=true in profile "kindnet-346000"
	I0331 11:07:42.445306   15794 addons.go:66] Setting default-storageclass=true in profile "kindnet-346000"
	I0331 11:07:42.445323   15794 addons.go:228] Setting addon storage-provisioner=true in "kindnet-346000"
	I0331 11:07:42.445339   15794 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-346000"
	I0331 11:07:42.445317   15794 config.go:182] Loaded profile config "kindnet-346000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:07:42.445383   15794 host.go:66] Checking if "kindnet-346000" exists ...
	I0331 11:07:42.445604   15794 cli_runner.go:164] Run: docker container inspect kindnet-346000 --format={{.State.Status}}
	I0331 11:07:42.446531   15794 cli_runner.go:164] Run: docker container inspect kindnet-346000 --format={{.State.Status}}
	I0331 11:07:38.997471   15937 out.go:204] * Preparing Kubernetes v1.27.0-rc.0 on Docker 23.0.2 ...
	I0331 11:07:38.997617   15937 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-101000 dig +short host.docker.internal
	I0331 11:07:39.117847   15937 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 11:07:39.118004   15937 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
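The grep above tests whether host.minikube.internal already maps to the host IP found by the dig. Done by hand, the check-then-append would look like this (a sketch using the 192.168.65.2 address from this run):

    grep -q 'host.minikube.internal' /etc/hosts || \
      echo '192.168.65.2 host.minikube.internal' | sudo tee -a /etc/hosts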
	I0331 11:07:39.124711   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:39.193227   15937 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 11:07:39.193300   15937 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:07:39.226501   15937 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:07:39.226539   15937 docker.go:569] Images already preloaded, skipping extraction
	I0331 11:07:39.226648   15937 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:07:39.311437   15937 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:07:39.311458   15937 cache_images.go:84] Images are preloaded, skipping loading
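Since every image required for v1.27.0-rc.0 is already in the docker image store, the preload tarball is skipped. A one-off manual check for a single required image, as a sketch:

    docker images --format '{{.Repository}}:{{.Tag}}' \
      | grep -Fx 'registry.k8s.io/kube-apiserver:v1.27.0-rc.0'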
	I0331 11:07:39.311542   15937 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 11:07:39.407671   15937 cni.go:84] Creating CNI manager for ""
	I0331 11:07:39.407696   15937 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:07:39.407724   15937 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 11:07:39.407742   15937 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-101000 NodeName:kubernetes-upgrade-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 11:07:39.407961   15937 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-101000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0331 11:07:39.408127   15937 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
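The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later diffed against the live copy to decide whether a reconfigure is needed. By hand, that decision plus an optional validation pass would look like this (a sketch; kubeadm config validate is assumed available, it shipped with kubeadm v1.26+):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.27.0-rc.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new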
	I0331 11:07:39.408225   15937 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.0-rc.0
	I0331 11:07:39.418556   15937 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 11:07:39.418632   15937 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 11:07:39.428714   15937 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0331 11:07:39.446835   15937 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0331 11:07:39.463582   15937 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
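Those three writes place the kubelet drop-in (10-kubeadm.conf), the kubelet unit, and the staged kubeadm config. The effective kubelet unit, with the drop-in merged in, can be reviewed via systemctl; a sketch:

    sudo systemctl cat kubelet.service   # base unit plus 10-kubeadm.conf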
	I0331 11:07:39.513171   15937 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0331 11:07:39.517617   15937 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000 for IP: 192.168.67.2
	I0331 11:07:39.517639   15937 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:07:39.517814   15937 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 11:07:39.517913   15937 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 11:07:39.518002   15937 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key
	I0331 11:07:39.518090   15937 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key.c7fa3a9e
	I0331 11:07:39.518156   15937 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.key
	I0331 11:07:39.518382   15937 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 11:07:39.518419   15937 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 11:07:39.518431   15937 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 11:07:39.518465   15937 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 11:07:39.518501   15937 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 11:07:39.518531   15937 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 11:07:39.518602   15937 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:07:39.519193   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 11:07:39.539990   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 11:07:39.609630   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 11:07:39.628943   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 11:07:39.648555   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 11:07:39.713487   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 11:07:39.731500   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 11:07:39.749780   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 11:07:39.768087   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 11:07:39.786225   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 11:07:39.820099   15937 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 11:07:39.838599   15937 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 11:07:39.853719   15937 ssh_runner.go:195] Run: openssl version
	I0331 11:07:39.859652   15937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 11:07:39.868050   15937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:07:39.872068   15937 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:07:39.872106   15937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:07:39.877589   15937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 11:07:39.885984   15937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 11:07:39.903911   15937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 11:07:39.908714   15937 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 11:07:39.908779   15937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 11:07:39.914854   15937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
	I0331 11:07:39.923493   15937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 11:07:39.933761   15937 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 11:07:39.938034   15937 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 11:07:39.938097   15937 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 11:07:39.944370   15937 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
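The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: openssl x509 -hash prints the hash that a cert's symlink must carry for lookup in /etc/ssl/certs. Reproducing one by hand, as a sketch:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem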
	I0331 11:07:39.952583   15937 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:07:39.952686   15937 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:07:39.974036   15937 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 11:07:39.982359   15937 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0331 11:07:39.982374   15937 kubeadm.go:633] restartCluster start
	I0331 11:07:39.982444   15937 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0331 11:07:39.989935   15937 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:07:39.990012   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:40.061137   15937 kubeconfig.go:92] found "kubernetes-upgrade-101000" server: "https://127.0.0.1:52016"
	I0331 11:07:40.061778   15937 kapi.go:59] client config for kubernetes-upgrade-101000: &rest.Config{Host:"https://127.0.0.1:52016", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key", CAFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24efe00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0331 11:07:40.062590   15937 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0331 11:07:40.071942   15937 api_server.go:165] Checking apiserver status ...
	I0331 11:07:40.072007   15937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:07:40.081622   15937 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14423/cgroup
	W0331 11:07:40.090613   15937 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/14423/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:07:40.090683   15937 ssh_runner.go:195] Run: ls
	I0331 11:07:40.094821   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:42.036857   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0331 11:07:42.036911   15937 retry.go:31] will retry after 281.079209ms: https://127.0.0.1:52016/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
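The 403 here is most likely transient: the probe is unauthenticated, and the rbac/bootstrap-roles hook (still failing in the 500 responses below) is what grants system:anonymous read access to /healthz. An authenticated probe using the client cert paths from the kapi.go line above would sidestep it; a sketch:

    curl -sS \
      --cacert /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt \
      --cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.crt \
      --key /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key \
      https://127.0.0.1:52016/healthz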
	I0331 11:07:42.318684   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:42.325593   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:42.325612   15937 retry.go:31] will retry after 304.087945ms: https://127.0.0.1:52016/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:42.540736   15794 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:07:42.599002   15794 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:07:42.599027   15794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0331 11:07:42.599216   15794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-346000
	I0331 11:07:42.603603   15794 addons.go:228] Setting addon default-storageclass=true in "kindnet-346000"
	I0331 11:07:42.603637   15794 host.go:66] Checking if "kindnet-346000" exists ...
	I0331 11:07:42.603989   15794 cli_runner.go:164] Run: docker container inspect kindnet-346000 --format={{.State.Status}}
	I0331 11:07:42.611858   15794 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
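Unescaped, the block that this sed pipeline splices into the CoreDNS Corefile ahead of the forward plugin is the following, which makes host.minikube.internal resolvable from inside the cluster:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }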
	I0331 11:07:42.676403   15794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kindnet-346000/id_rsa Username:docker}
	I0331 11:07:42.676444   15794 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0331 11:07:42.676457   15794 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0331 11:07:42.676527   15794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-346000
	I0331 11:07:42.750044   15794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kindnet-346000/id_rsa Username:docker}
	I0331 11:07:42.911444   15794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:07:42.933059   15794 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0331 11:07:43.021364   15794 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-346000" context rescaled to 1 replicas
	I0331 11:07:43.021395   15794 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 11:07:43.028365   15794 start.go:916] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0331 11:07:43.044231   15794 out.go:177] * Verifying Kubernetes components...
	I0331 11:07:43.065205   15794 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:07:43.466758   15794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-346000
	I0331 11:07:43.490875   15794 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0331 11:07:43.511355   15794 addons.go:499] enable addons completed in 1.066155069s: enabled=[storage-provisioner default-storageclass]
	I0331 11:07:43.548836   15794 node_ready.go:35] waiting up to 15m0s for node "kindnet-346000" to be "Ready" ...
	I0331 11:07:43.553394   15794 node_ready.go:49] node "kindnet-346000" has status "Ready":"True"
	I0331 11:07:43.553411   15794 node_ready.go:38] duration metric: took 4.547659ms waiting for node "kindnet-346000" to be "Ready" ...
	I0331 11:07:43.553422   15794 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 11:07:43.564605   15794 pod_ready.go:78] waiting up to 15m0s for pod "coredns-787d4945fb-dkvfw" in "kube-system" namespace to be "Ready" ...
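These waits (node Ready, then each system-critical pod Ready) map roughly onto what kubectl wait expresses directly; a sketch, assuming the kubeconfig context carries the profile name:

    kubectl --context kindnet-346000 wait --for=condition=Ready \
      node/kindnet-346000 --timeout=15m
    kubectl --context kindnet-346000 -n kube-system wait --for=condition=Ready \
      pod/coredns-787d4945fb-dkvfw --timeout=15m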
	I0331 11:07:42.629806   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:42.636595   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:42.636648   15937 retry.go:31] will retry after 302.896936ms: https://127.0.0.1:52016/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:42.939609   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:42.945325   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:42.945348   15937 retry.go:31] will retry after 370.390904ms: https://127.0.0.1:52016/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:43.315882   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:43.342619   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 200:
	ok
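
	The healthz probes above follow a simple poll-and-retry pattern: request https://127.0.0.1:52016/healthz, log the body of any non-200 response, back off, and try again until the apiserver answers "ok". A minimal Go sketch of that pattern (the URL, timeout, and backoff values are illustrative placeholders, not minikube's exact internals):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // pollHealthz probes an apiserver /healthz endpoint until it returns 200
	    // or the deadline passes, mirroring the check/retry loop in the log.
	    func pollHealthz(url string, timeout time.Duration) error {
	        // Bare sketch: skip cert verification for the localhost endpoint;
	        // minikube itself trusts the cluster CA instead.
	        client := &http.Client{
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        backoff := 250 * time.Millisecond
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz answered "ok"
	                }
	                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	            }
	            time.Sleep(backoff)
	            backoff *= 2 // back off between retries, as retry.go does above
	        }
	        return fmt.Errorf("apiserver never became healthy at %s", url)
	    }

	    func main() {
	        if err := pollHealthz("https://127.0.0.1:52016/healthz", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }

	The 500s above are expected while post-start hooks such as rbac/bootstrap-roles are still running; once they finish, the same endpoint flips to 200, as it does at 11:07:43.342 in the log.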
	I0331 11:07:43.359186   15937 system_pods.go:86] 5 kube-system pods found
	I0331 11:07:43.359208   15937 system_pods.go:89] "etcd-kubernetes-upgrade-101000" [65e14460-0431-4f4f-9883-bf5bdbf24185] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0331 11:07:43.359219   15937 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-101000" [cfb43e69-cc81-4c06-ace7-a87ba00c90f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0331 11:07:43.359233   15937 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-101000" [b981a3a5-fc26-492f-a3e3-9552432b0cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0331 11:07:43.359241   15937 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-101000" [b5ae2c92-5195-4bf0-9d28-46be0486950f] Pending
	I0331 11:07:43.359245   15937 system_pods.go:89] "storage-provisioner" [c4be6d2b-ecff-4572-b8a2-55d819be0f31] Pending
	I0331 11:07:43.359251   15937 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy, kube-scheduler
	I0331 11:07:43.359258   15937 kubeadm.go:1120] stopping kube-system containers ...
	I0331 11:07:43.359326   15937 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:07:43.384249   15937 docker.go:465] Stopping containers: [ff159a5c42e4 82313a0f2c4e 8bcf275fa409 1a9e79f95219 2d01e1db886a 3e6dbf74be18 624194a3b4b8 ce2a0bdbecb0 9aedaa4822a1 37f831f205fe d6142a98248a 3db90f6388db 36df6b587d9c f94cac283e75 77ee69b4c243 b434f2d884e4 eca3143ab184]
	I0331 11:07:43.384338   15937 ssh_runner.go:195] Run: docker stop ff159a5c42e4 82313a0f2c4e 8bcf275fa409 1a9e79f95219 2d01e1db886a 3e6dbf74be18 624194a3b4b8 ce2a0bdbecb0 9aedaa4822a1 37f831f205fe d6142a98248a 3db90f6388db 36df6b587d9c f94cac283e75 77ee69b4c243 b434f2d884e4 eca3143ab184
	I0331 11:07:44.338326   15937 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0331 11:07:44.425449   15937 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:07:44.435742   15937 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Mar 31 18:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Mar 31 18:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Mar 31 18:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Mar 31 18:03 /etc/kubernetes/scheduler.conf
	
	I0331 11:07:44.435840   15937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0331 11:07:44.445708   15937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0331 11:07:44.456180   15937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0331 11:07:44.503299   15937 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0331 11:07:44.512050   15937 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:07:44.520948   15937 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0331 11:07:44.520961   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:07:44.569103   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:07:45.118961   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:07:45.264493   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:07:45.329803   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
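
	Note that the reconfigure path above runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml rather than a full kubeadm init, which would reset existing cluster state. A rough Go sketch of that phase sequence, assuming kubeadm is on PATH and the caller has the needed privileges:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // runInitPhases replays the per-phase sequence from the log: each
	    // kubeadm init phase is invoked separately against the same config.
	    func runInitPhases(cfg string) error {
	        phases := [][]string{
	            {"init", "phase", "certs", "all"},
	            {"init", "phase", "kubeconfig", "all"},
	            {"init", "phase", "kubelet-start"},
	            {"init", "phase", "control-plane", "all"},
	            {"init", "phase", "etcd", "local"},
	        }
	        for _, phase := range phases {
	            args := append(append([]string{}, phase...), "--config", cfg)
	            out, err := exec.Command("kubeadm", args...).CombinedOutput()
	            if err != nil {
	                return fmt.Errorf("kubeadm %v: %v\n%s", phase, err, out)
	            }
	        }
	        return nil
	    }

	    func main() {
	        if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
	            fmt.Println(err)
	        }
	    }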
	I0331 11:07:45.413444   15937 api_server.go:51] waiting for apiserver process to appear ...
	I0331 11:07:45.413542   15937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:07:45.927725   15937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:07:46.427672   15937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:07:46.440045   15937 api_server.go:71] duration metric: took 1.026669519s to wait for apiserver process to appear ...
	I0331 11:07:46.440074   15937 api_server.go:87] waiting for apiserver healthz status ...
	I0331 11:07:46.440091   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:48.536890   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0331 11:07:48.536917   15937 api_server.go:102] status: https://127.0.0.1:52016/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0331 11:07:49.036980   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:49.043789   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0331 11:07:49.043806   15937 api_server.go:102] status: https://127.0.0.1:52016/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:49.537416   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:49.546845   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0331 11:07:49.546871   15937 api_server.go:102] status: https://127.0.0.1:52016/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:07:50.037294   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:50.043291   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 200:
	ok
	I0331 11:07:50.051487   15937 api_server.go:140] control plane version: v1.27.0-rc.0
	I0331 11:07:50.051501   15937 api_server.go:130] duration metric: took 3.61159925s to wait for apiserver health ...
	I0331 11:07:50.051510   15937 cni.go:84] Creating CNI manager for ""
	I0331 11:07:50.051519   15937 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:07:50.073805   15937 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0331 11:07:50.093755   15937 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0331 11:07:50.103850   15937 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
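
	The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI conflist. The sketch below writes a config of the same general shape; the field values are illustrative assumptions rather than minikube's exact template:

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	    )

	    // writeBridgeConflist writes a minimal bridge CNI config of the general
	    // shape used with the docker driver. Values here are assumptions.
	    func writeBridgeConflist(dir string) error {
	        conflist := `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`
	        if err := os.MkdirAll(dir, 0o755); err != nil {
	            return err
	        }
	        return os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644)
	    }

	    func main() {
	        if err := writeBridgeConflist("/etc/cni/net.d"); err != nil {
	            fmt.Println(err)
	        }
	    }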
	I0331 11:07:50.119249   15937 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 11:07:50.125625   15937 system_pods.go:59] 5 kube-system pods found
	I0331 11:07:50.125643   15937 system_pods.go:61] "etcd-kubernetes-upgrade-101000" [65e14460-0431-4f4f-9883-bf5bdbf24185] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0331 11:07:50.125651   15937 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-101000" [cfb43e69-cc81-4c06-ace7-a87ba00c90f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0331 11:07:50.125662   15937 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-101000" [b981a3a5-fc26-492f-a3e3-9552432b0cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0331 11:07:50.125668   15937 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-101000" [b5ae2c92-5195-4bf0-9d28-46be0486950f] Pending
	I0331 11:07:50.125672   15937 system_pods.go:61] "storage-provisioner" [c4be6d2b-ecff-4572-b8a2-55d819be0f31] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0331 11:07:50.125677   15937 system_pods.go:74] duration metric: took 6.411271ms to wait for pod list to return data ...
	I0331 11:07:50.125684   15937 node_conditions.go:102] verifying NodePressure condition ...
	I0331 11:07:50.129641   15937 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0331 11:07:50.129660   15937 node_conditions.go:123] node cpu capacity is 6
	I0331 11:07:50.129674   15937 node_conditions.go:105] duration metric: took 3.985034ms to run NodePressure ...
	I0331 11:07:50.129689   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:07:50.276337   15937 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0331 11:07:50.284981   15937 ops.go:34] apiserver oom_adj: -16
	I0331 11:07:50.284991   15937 kubeadm.go:637] restartCluster took 10.303110661s
	I0331 11:07:50.284998   15937 kubeadm.go:403] StartCluster complete in 10.33294058s
	I0331 11:07:50.285011   15937 settings.go:142] acquiring lock: {Name:mk3cb9e1bd7c44f22a996c12a2b2b34c5bbc4ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:07:50.285087   15937 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:07:50.285720   15937 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:07:50.285970   15937 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0331 11:07:50.285994   15937 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0331 11:07:50.286053   15937 addons.go:66] Setting storage-provisioner=true in profile "kubernetes-upgrade-101000"
	I0331 11:07:50.286061   15937 addons.go:66] Setting default-storageclass=true in profile "kubernetes-upgrade-101000"
	I0331 11:07:50.286068   15937 addons.go:228] Setting addon storage-provisioner=true in "kubernetes-upgrade-101000"
	I0331 11:07:50.286073   15937 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-101000"
	W0331 11:07:50.286076   15937 addons.go:237] addon storage-provisioner should already be in state true
	I0331 11:07:50.286114   15937 host.go:66] Checking if "kubernetes-upgrade-101000" exists ...
	I0331 11:07:50.286113   15937 config.go:182] Loaded profile config "kubernetes-upgrade-101000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0331 11:07:50.286352   15937 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Status}}
	I0331 11:07:50.286415   15937 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Status}}
	I0331 11:07:50.286505   15937 kapi.go:59] client config for kubernetes-upgrade-101000: &rest.Config{Host:"https://127.0.0.1:52016", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key", CAFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24efe00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0331 11:07:50.292801   15937 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-101000" context rescaled to 1 replicas
	I0331 11:07:50.292839   15937 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 11:07:50.314157   15937 out.go:177] * Verifying Kubernetes components...
	I0331 11:07:50.355970   15937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:07:50.375019   15937 kapi.go:59] client config for kubernetes-upgrade-101000: &rest.Config{Host:"https://127.0.0.1:52016", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubernetes-upgrade-101000/client.key", CAFile:"/Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24efe00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0331 11:07:50.396768   15937 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:07:45.591587   15794 pod_ready.go:102] pod "coredns-787d4945fb-dkvfw" in "kube-system" namespace has status "Ready":"False"
	I0331 11:07:48.080557   15794 pod_ready.go:102] pod "coredns-787d4945fb-dkvfw" in "kube-system" namespace has status "Ready":"False"
	I0331 11:07:50.109259   15794 pod_ready.go:102] pod "coredns-787d4945fb-dkvfw" in "kube-system" namespace has status "Ready":"False"
	I0331 11:07:50.383785   15937 addons.go:228] Setting addon default-storageclass=true in "kubernetes-upgrade-101000"
	I0331 11:07:50.387309   15937 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0331 11:07:50.387333   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	W0331 11:07:50.418030   15937 addons.go:237] addon default-storageclass should already be in state true
	I0331 11:07:50.418070   15937 host.go:66] Checking if "kubernetes-upgrade-101000" exists ...
	I0331 11:07:50.418098   15937 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:07:50.418113   15937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0331 11:07:50.418187   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:50.420425   15937 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-101000 --format={{.State.Status}}
	I0331 11:07:50.496042   15937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52012 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:07:50.496097   15937 api_server.go:51] waiting for apiserver process to appear ...
	I0331 11:07:50.496214   15937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:07:50.501541   15937 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0331 11:07:50.501556   15937 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0331 11:07:50.501647   15937 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-101000
	I0331 11:07:50.510598   15937 api_server.go:71] duration metric: took 217.738937ms to wait for apiserver process to appear ...
	I0331 11:07:50.510627   15937 api_server.go:87] waiting for apiserver healthz status ...
	I0331 11:07:50.510640   15937 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:52016/healthz ...
	I0331 11:07:50.517304   15937 api_server.go:278] https://127.0.0.1:52016/healthz returned 200:
	ok
	I0331 11:07:50.519590   15937 api_server.go:140] control plane version: v1.27.0-rc.0
	I0331 11:07:50.519612   15937 api_server.go:130] duration metric: took 8.977505ms to wait for apiserver health ...
	I0331 11:07:50.519621   15937 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 11:07:50.526720   15937 system_pods.go:59] 5 kube-system pods found
	I0331 11:07:50.526746   15937 system_pods.go:61] "etcd-kubernetes-upgrade-101000" [65e14460-0431-4f4f-9883-bf5bdbf24185] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0331 11:07:50.526760   15937 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-101000" [cfb43e69-cc81-4c06-ace7-a87ba00c90f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0331 11:07:50.526774   15937 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-101000" [b981a3a5-fc26-492f-a3e3-9552432b0cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0331 11:07:50.526785   15937 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-101000" [b5ae2c92-5195-4bf0-9d28-46be0486950f] Pending
	I0331 11:07:50.526792   15937 system_pods.go:61] "storage-provisioner" [c4be6d2b-ecff-4572-b8a2-55d819be0f31] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0331 11:07:50.526802   15937 system_pods.go:74] duration metric: took 7.168013ms to wait for pod list to return data ...
	I0331 11:07:50.526813   15937 kubeadm.go:578] duration metric: took 233.964339ms to wait for : map[apiserver:true system_pods:true] ...
	I0331 11:07:50.526825   15937 node_conditions.go:102] verifying NodePressure condition ...
	I0331 11:07:50.531056   15937 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0331 11:07:50.531076   15937 node_conditions.go:123] node cpu capacity is 6
	I0331 11:07:50.531086   15937 node_conditions.go:105] duration metric: took 4.253233ms to run NodePressure ...
	I0331 11:07:50.531097   15937 start.go:228] waiting for startup goroutines ...
	I0331 11:07:50.579420   15937 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52012 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/kubernetes-upgrade-101000/id_rsa Username:docker}
	I0331 11:07:50.609648   15937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:07:50.688606   15937 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0331 11:07:51.287595   15937 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0331 11:07:51.328914   15937 addons.go:499] enable addons completed in 1.042981048s: enabled=[storage-provisioner default-storageclass]
	I0331 11:07:51.328972   15937 start.go:233] waiting for cluster config update ...
	I0331 11:07:51.328987   15937 start.go:242] writing updated cluster config ...
	I0331 11:07:51.329336   15937 ssh_runner.go:195] Run: rm -f paused
	I0331 11:07:51.372484   15937 start.go:557] kubectl: 1.25.4, cluster: 1.27.0-rc.0 (minor skew: 2)
	I0331 11:07:51.409120   15937 out.go:177] 
	W0331 11:07:51.430060   15937 out.go:239] ! /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.27.0-rc.0.
	I0331 11:07:51.450929   15937 out.go:177]   - Want kubectl v1.27.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0331 11:07:51.493222   15937 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-101000" cluster and "default" namespace by default
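
	The kubectl warning in this run comes from a minor-version comparison: 1.25 (local kubectl) versus 1.27 (cluster) is a skew of two, one more than the single minor version kubectl officially supports. A minimal sketch of that check:

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	    )

	    // minorSkew returns the absolute difference in minor versions between
	    // the local kubectl and the cluster, which drives the warning above
	    // (1.25.4 vs 1.27.0-rc.0 -> skew 2). Parsing is deliberately minimal.
	    func minorSkew(kubectl, cluster string) (int, error) {
	        minor := func(v string) (int, error) {
	            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	            if len(parts) < 2 {
	                return 0, fmt.Errorf("unparseable version %q", v)
	            }
	            return strconv.Atoi(parts[1])
	        }
	        km, err := minor(kubectl)
	        if err != nil {
	            return 0, err
	        }
	        cm, err := minor(cluster)
	        if err != nil {
	            return 0, err
	        }
	        if km > cm {
	            return km - cm, nil
	        }
	        return cm - km, nil
	    }

	    func main() {
	        skew, _ := minorSkew("1.25.4", "1.27.0-rc.0")
	        fmt.Println("minor skew:", skew) // prints 2
	    }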
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-03-31 18:05:52 UTC, end at Fri 2023-03-31 18:07:52 UTC. --
	Mar 31 18:07:38 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:38Z" level=info msg="Docker cri networking managed by network plugin cni"
	Mar 31 18:07:38 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:38Z" level=info msg="Docker Info: &{ID:614b6d08-ec7d-4926-9ee5-baeb7b593286 Containers:13 ContainersRunning:0 ContainersPaused:0 ContainersStopped:13 Images:15 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:false NGoroutines:32 SystemTime:2023-03-31T18:07:38.869501687Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.15.49-linuxkit OperatingSystem:Ubuntu 20.04.5 LTS OSVersion:20.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0000cf6c0 NCPU:6 MemTotal:6231724032 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:kubernetes-upgrade-101000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:23.0.2 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f Expected:1e1ea6e986c6c86565bc33d52e34b81b3e2bc71f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Mar 31 18:07:38 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:38Z" level=info msg="Setting cgroupDriver cgroupfs"
	Mar 31 18:07:38 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:38Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Mar 31 18:07:38 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:38Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Mar 31 18:07:38 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:38Z" level=info msg="Start cri-dockerd grpc backend"
	Mar 31 18:07:38 kubernetes-upgrade-101000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Mar 31 18:07:39 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e6dbf74be18b89a9ee76560a2ed7bf692ad441e23b0865cd01a33475c61040e/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:39 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a9e79f95219ce9e57e431c1785ecc757899e7ad9248ce07c304742d1a1a3f0f/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:39 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2d01e1db886ab7f63774f0f72f983f01cf59a40cd09c866c6b5c20121b6db142/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:39 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8bcf275fa409a59899f46f021723e5bd8c3346eb94520bed1f2937881f665116/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:43 kubernetes-upgrade-101000 dockerd[13719]: time="2023-03-31T18:07:43.448828668Z" level=info msg="ignoring event" container=3e6dbf74be18b89a9ee76560a2ed7bf692ad441e23b0865cd01a33475c61040e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 31 18:07:43 kubernetes-upgrade-101000 dockerd[13719]: time="2023-03-31T18:07:43.453484329Z" level=info msg="ignoring event" container=2d01e1db886ab7f63774f0f72f983f01cf59a40cd09c866c6b5c20121b6db142 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 31 18:07:43 kubernetes-upgrade-101000 dockerd[13719]: time="2023-03-31T18:07:43.458801850Z" level=info msg="ignoring event" container=8bcf275fa409a59899f46f021723e5bd8c3346eb94520bed1f2937881f665116 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 31 18:07:43 kubernetes-upgrade-101000 dockerd[13719]: time="2023-03-31T18:07:43.460146289Z" level=info msg="ignoring event" container=ff159a5c42e4c3c48794fbd56bcb96b41807607d85b53801c2c89547943a9f61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 31 18:07:43 kubernetes-upgrade-101000 dockerd[13719]: time="2023-03-31T18:07:43.466485812Z" level=info msg="ignoring event" container=1a9e79f95219ce9e57e431c1785ecc757899e7ad9248ce07c304742d1a1a3f0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 31 18:07:44 kubernetes-upgrade-101000 dockerd[13719]: time="2023-03-31T18:07:44.303782612Z" level=info msg="ignoring event" container=82313a0f2c4ee5cfea1d34b0160ed058efdbb4574ad91490dc4179f72ad6bbeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65dadd1510ecafab3af2d5880399045a9b02cf665d8020a6b773b5743b993bdf/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: W0331 18:07:44.425694   14009 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/930f3bf41abbf3073a46c75b93189ec92d98b9bcd489887bdaf6b8c6c823e867/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: W0331 18:07:44.432418   14009 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/55a7d2880205c6b8df792bf2b520b514532b3644d07546276be3301e623487dd/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: W0331 18:07:44.454186   14009 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: time="2023-03-31T18:07:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/de616d4efea439cfbc1c3ad814ff45dc1290a1b7a000167d4564d2e36b37509d/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 31 18:07:44 kubernetes-upgrade-101000 cri-dockerd[14009]: W0331 18:07:44.526169   14009 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a42d1934ea7d5       9f9d741d7f1c5       6 seconds ago       Running             kube-controller-manager   3                   65dadd1510eca
	da996a355fe6f       d468edbd6d11a       7 seconds ago       Running             kube-scheduler            2                   930f3bf41abbf
	1000eebdcd876       2e5f542d09de7       7 seconds ago       Running             kube-apiserver            2                   de616d4efea43
	65e76332d3d40       86b6af7dd652c       7 seconds ago       Running             etcd                      2                   55a7d2880205c
	ff159a5c42e4c       86b6af7dd652c       13 seconds ago      Exited              etcd                      1                   8bcf275fa409a
	82313a0f2c4ee       2e5f542d09de7       13 seconds ago      Exited              kube-apiserver            1                   1a9e79f95219c
	ce2a0bdbecb0f       9f9d741d7f1c5       16 seconds ago      Exited              kube-controller-manager   2                   37f831f205fe6
	9aedaa4822a16       d468edbd6d11a       16 seconds ago      Exited              kube-scheduler            1                   d6142a98248a9
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-101000
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-101000
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 Mar 2023 18:06:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-101000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 Mar 2023 18:07:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 Mar 2023 18:07:48 +0000   Fri, 31 Mar 2023 18:06:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 Mar 2023 18:07:48 +0000   Fri, 31 Mar 2023 18:06:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 Mar 2023 18:07:48 +0000   Fri, 31 Mar 2023 18:06:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 Mar 2023 18:07:48 +0000   Fri, 31 Mar 2023 18:07:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-101000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085668Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085668Ki
	  pods:               110
	System Info:
	  Machine ID:                 d293d9137ded4893a7168cb94a7bb5ae
	  System UUID:                d293d9137ded4893a7168cb94a7bb5ae
	  Boot ID:                    dd9e0c7f-06c2-46fa-9da6-a2d6b719e168
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.2
	  Kubelet Version:            v1.27.0-rc.0
	  Kube-Proxy Version:         v1.27.0-rc.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-101000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         27s
	  kube-system                 kube-apiserver-kubernetes-upgrade-101000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-101000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-kubernetes-upgrade-101000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  Starting                 105s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet  Node kubernetes-upgrade-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet  Node kubernetes-upgrade-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet  Node kubernetes-upgrade-101000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 8s                   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)      kubelet  Node kubernetes-upgrade-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)      kubelet  Node kubernetes-upgrade-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)      kubelet  Node kubernetes-upgrade-101000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                   kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000042] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000072] FS-Cache: N-cookie d=000000009a6ef848{9p.inode} n=0000000014db9805
	[  +0.000055] FS-Cache: N-key=[8] '3ecaa40500000000'
	[  +0.003062] FS-Cache: Duplicate cookie detected
	[  +0.000076] FS-Cache: O-cookie c=00000006 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000082] FS-Cache: O-cookie d=000000009a6ef848{9p.inode} n=000000003be55c32
	[  +0.000063] FS-Cache: O-key=[8] '3ecaa40500000000'
	[  +0.000064] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000091] FS-Cache: N-cookie d=000000009a6ef848{9p.inode} n=0000000026ea58e3
	[  +0.000104] FS-Cache: N-key=[8] '3ecaa40500000000'
	[  +3.722799] FS-Cache: Duplicate cookie detected
	[  +0.000052] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000081] FS-Cache: O-cookie d=000000009a6ef848{9p.inode} n=00000000c0288674
	[  +0.000040] FS-Cache: O-key=[8] '3dcaa40500000000'
	[  +0.000056] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000163] FS-Cache: N-cookie d=000000009a6ef848{9p.inode} n=00000000334ea85c
	[  +0.000054] FS-Cache: N-key=[8] '3dcaa40500000000'
	[  +0.499375] FS-Cache: Duplicate cookie detected
	[  +0.000093] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000041] FS-Cache: O-cookie d=000000009a6ef848{9p.inode} n=00000000ec6304d1
	[  +0.000085] FS-Cache: O-key=[8] '46caa40500000000'
	[  +0.000109] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000067] FS-Cache: N-cookie d=000000009a6ef848{9p.inode} n=00000000652889d1
	[  +0.000221] FS-Cache: N-key=[8] '46caa40500000000'
	[Mar31 17:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> etcd [65e76332d3d4] <==
	* {"level":"info","ts":"2023-03-31T18:07:46.122Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-03-31T18:07:46.123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-03-31T18:07:46.123Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-03-31T18:07:46.123Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.3"}
	{"level":"info","ts":"2023-03-31T18:07:46.123Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.3"}
	{"level":"info","ts":"2023-03-31T18:07:46.124Z","caller":"membership/cluster.go:576","msg":"updated cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","from":"3.3","to":"3.5"}
	{"level":"info","ts":"2023-03-31T18:07:46.126Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-31T18:07:46.127Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-31T18:07:46.127Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-31T18:07:46.127Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-31T18:07:46.127Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-31T18:07:47.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
	{"level":"info","ts":"2023-03-31T18:07:47.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-03-31T18:07:47.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-31T18:07:47.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
	{"level":"info","ts":"2023-03-31T18:07:47.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-31T18:07:47.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
	{"level":"info","ts":"2023-03-31T18:07:47.611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-31T18:07:47.613Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-101000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-31T18:07:47.613Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:07:47.614Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:07:47.614Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-31T18:07:47.614Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-31T18:07:47.614Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-31T18:07:47.615Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> etcd [ff159a5c42e4] <==
	* {"level":"info","ts":"2023-03-31T18:07:39.644Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-31T18:07:39.644Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-31T18:07:39.644Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-31T18:07:39.644Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-31T18:07:39.644Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-31T18:07:41.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-31T18:07:41.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-31T18:07:41.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-03-31T18:07:41.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-03-31T18:07:41.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-31T18:07:41.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-03-31T18:07:41.138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-31T18:07:41.140Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-101000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-31T18:07:41.140Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:07:41.140Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-31T18:07:41.141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-31T18:07:41.141Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-31T18:07:41.142Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-31T18:07:41.142Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-31T18:07:43.423Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-31T18:07:43.423Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-101000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-03-31T18:07:43.433Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-03-31T18:07:43.435Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-31T18:07:43.437Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-31T18:07:43.437Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-101000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:07:53 up  1:06,  0 users,  load average: 3.33, 2.01, 1.49
	Linux kubernetes-upgrade-101000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [1000eebdcd87] <==
	* I0331 18:07:48.529108       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0331 18:07:48.529133       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0331 18:07:48.529145       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0331 18:07:48.529174       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0331 18:07:48.529330       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0331 18:07:48.530207       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0331 18:07:48.530261       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	E0331 18:07:48.546540       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0331 18:07:48.547583       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0331 18:07:48.550449       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0331 18:07:48.622749       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0331 18:07:48.622845       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0331 18:07:48.622882       1 shared_informer.go:318] Caches are synced for configmaps
	I0331 18:07:48.623086       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0331 18:07:48.623090       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0331 18:07:48.630226       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0331 18:07:48.630316       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0331 18:07:48.630279       1 cache.go:39] Caches are synced for autoregister controller
	I0331 18:07:49.336588       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0331 18:07:49.524229       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0331 18:07:50.202085       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0331 18:07:50.215018       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0331 18:07:50.240989       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0331 18:07:50.262715       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0331 18:07:50.269119       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [82313a0f2c4e] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0331 18:07:43.428460       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0331 18:07:43.428494       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0331 18:07:43.428686       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [a42d1934ea7d] <==
	* I0331 18:07:50.626685       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0331 18:07:50.633599       1 controllermanager.go:638] "Started controller" controller="ttl"
	I0331 18:07:50.633708       1 ttl_controller.go:124] "Starting TTL controller"
	I0331 18:07:50.633714       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0331 18:07:50.641885       1 controllermanager.go:638] "Started controller" controller="tokencleaner"
	I0331 18:07:50.641907       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0331 18:07:50.642013       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0331 18:07:50.642022       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0331 18:07:50.652868       1 controllermanager.go:638] "Started controller" controller="clusterrole-aggregation"
	I0331 18:07:50.653001       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0331 18:07:50.653010       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0331 18:07:50.659699       1 shared_informer.go:318] Caches are synced for tokens
	I0331 18:07:50.663743       1 controllermanager.go:638] "Started controller" controller="serviceaccount"
	I0331 18:07:50.663785       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0331 18:07:50.663791       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0331 18:07:50.961750       1 controllermanager.go:638] "Started controller" controller="horizontalpodautoscaling"
	I0331 18:07:50.961809       1 horizontal.go:200] "Starting HPA controller"
	I0331 18:07:50.961816       1 shared_informer.go:311] Waiting for caches to sync for HPA
	E0331 18:07:51.113953       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0331 18:07:51.113999       1 controllermanager.go:616] "Warning: skipping controller" controller="service"
	I0331 18:07:51.114008       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0331 18:07:51.114013       1 controllermanager.go:616] "Warning: skipping controller" controller="route"
	I0331 18:07:51.291202       1 controllermanager.go:638] "Started controller" controller="attachdetach"
	I0331 18:07:51.291408       1 attach_detach_controller.go:343] "Starting attach detach controller"
	I0331 18:07:51.291456       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	
	* 
	* ==> kube-controller-manager [ce2a0bdbecb0] <==
	* I0331 18:07:36.654768       1 serving.go:348] Generated self-signed cert in-memory
	I0331 18:07:36.859449       1 controllermanager.go:187] "Starting" version="v1.27.0-rc.0"
	I0331 18:07:36.859488       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:07:36.860536       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0331 18:07:36.860565       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0331 18:07:36.861015       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0331 18:07:36.861097       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [9aedaa4822a1] <==
	* W0331 18:07:36.847064       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847084       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.847226       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847272       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.847233       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847496       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.847229       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847536       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.847450       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://control-plane.minikube.internal:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847550       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://control-plane.minikube.internal:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.847566       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847596       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.847733       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847777       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.847908       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.847959       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.848270       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.848388       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W0331 18:07:36.848448       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E0331 18:07:36.848511       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	I0331 18:07:37.411504       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0331 18:07:37.411604       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0331 18:07:37.411616       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0331 18:07:37.411728       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [da996a355fe6] <==
	* I0331 18:07:46.497593       1 serving.go:348] Generated self-signed cert in-memory
	I0331 18:07:48.608971       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.0-rc.0"
	I0331 18:07:48.609013       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0331 18:07:48.613225       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0331 18:07:48.613262       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0331 18:07:48.613265       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0331 18:07:48.613275       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0331 18:07:48.613303       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0331 18:07:48.613313       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0331 18:07:48.613789       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0331 18:07:48.613821       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0331 18:07:48.714914       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0331 18:07:48.714930       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0331 18:07:48.715845       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-03-31 18:05:52 UTC, end at Fri 2023-03-31 18:07:54 UTC. --
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.609690   14991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b0409d04c678728f1efe3dc2382bbda-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-101000\" (UID: \"3b0409d04c678728f1efe3dc2382bbda\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-101000"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.609716   14991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b0409d04c678728f1efe3dc2382bbda-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-101000\" (UID: \"3b0409d04c678728f1efe3dc2382bbda\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-101000"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612089   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e6dbf74be18b89a9ee76560a2ed7bf692ad441e23b0865cd01a33475c61040e"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612111   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6142a98248a9851bba14771b7ad78c0b9df26c5da65f9b8d89f97e4987d57cd"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612120   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77ee69b4c243397d13024d6f3d00cca5438dfeb66c0316ad027c2aabf33eeaaa"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612132   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bcf275fa409a59899f46f021723e5bd8c3346eb94520bed1f2937881f665116"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612139   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="624194a3b4b80317fd1ce4a7b8f5f38274e4e352b2c9f2eae37c235f1467c44a"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612169   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36df6b587d9cb717c48b6fb1dd3021b9427c4534c2d69ea47d29b465725ebcfd"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612184   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a9e79f95219ce9e57e431c1785ecc757899e7ad9248ce07c304742d1a1a3f0f"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.612192   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eca3143ab184d21af646ab7c82738ae04c3bb2a223e69402b65fe81d94be96b7"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.737629   14991 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-101000"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: E0331 18:07:45.737950   14991 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-101000"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.881539   14991 scope.go:115] "RemoveContainer" containerID="ff159a5c42e4c3c48794fbd56bcb96b41807607d85b53801c2c89547943a9f61"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.902461   14991 scope.go:115] "RemoveContainer" containerID="82313a0f2c4ee5cfea1d34b0160ed058efdbb4574ad91490dc4179f72ad6bbeb"
	Mar 31 18:07:45 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:45.926170   14991 scope.go:115] "RemoveContainer" containerID="9aedaa4822a1650768595c8e00f0527f504ac7dd680726c38ec0583f0959b1dc"
	Mar 31 18:07:46 kubernetes-upgrade-101000 kubelet[14991]: E0331 18:07:46.011188   14991 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-101000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Mar 31 18:07:46 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:46.149799   14991 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-101000"
	Mar 31 18:07:46 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:46.634055   14991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37f831f205fe631447c92521789a45b4b15ead4f042c5211231739e0a3a79d72"
	Mar 31 18:07:46 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:46.644623   14991 scope.go:115] "RemoveContainer" containerID="ce2a0bdbecb0fe01273f0044396f78ffc1dee76a4fb61c4f404ab7c7e0b57c56"
	Mar 31 18:07:48 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:48.640233   14991 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-101000"
	Mar 31 18:07:48 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:48.640346   14991 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-101000"
	Mar 31 18:07:48 kubernetes-upgrade-101000 kubelet[14991]: E0331 18:07:48.660383   14991 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-101000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-101000"
	Mar 31 18:07:49 kubernetes-upgrade-101000 kubelet[14991]: E0331 18:07:49.279708   14991 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-101000\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-101000"
	Mar 31 18:07:49 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:49.378177   14991 apiserver.go:52] "Watching apiserver"
	Mar 31 18:07:49 kubernetes-upgrade-101000 kubelet[14991]: I0331 18:07:49.405695   14991 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-101000 -n kubernetes-upgrade-101000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-101000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-scheduler-kubernetes-upgrade-101000 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-101000 describe pod kube-scheduler-kubernetes-upgrade-101000 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-101000 describe pod kube-scheduler-kubernetes-upgrade-101000 storage-provisioner: exit status 1 (52.484192ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-scheduler-kubernetes-upgrade-101000" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-101000 describe pod kube-scheduler-kubernetes-upgrade-101000 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-101000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-101000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-101000: (3.053487908s)
--- FAIL: TestKubernetesUpgrade (392.94s)
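
The log excerpts above trace the failure chain: etcd received SIGTERM at 18:07:43 and shut down while the old kube-apiserver still held gRPC channels to 127.0.0.1:2379 (hence the repeated "connection refused" dials), and the old kube-scheduler exited with "finished without leader elect" before its replacement finished syncing. A hedged probe for a cluster stuck in this state, assuming etcdctl is available in the node image and that minikube keeps its etcd certificates under /var/lib/minikube/certs/etcd (only the data-dir and client URL come from the log itself):

	# Ask the local etcd member whether it is serving; the cert paths below
	# are an assumption about minikube's conventional certificate layout.
	minikube -p kubernetes-upgrade-101000 ssh -- sudo ETCDCTL_API=3 etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health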

                                                
                                    
TestMissingContainerUpgrade (68.06s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2263857420.exe start -p missing-upgrade-752000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2263857420.exe start -p missing-upgrade-752000 --memory=2200 --driver=docker : exit status 78 (50.137347159s)

                                                
                                                
-- stdout --
	* [missing-upgrade-752000] minikube v1.9.1 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-752000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-752000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 190.79 KiB ... 542.91 MiB (intermediate progress readings elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:00:49.971807801 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-752000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:01:08.990513906 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
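
The diff embedded in the error above shows the override rule its own comments describe: systemd allows only one ExecStart= for a Type=notify service, so a replacement unit must first write an empty ExecStart= to clear the inherited command before supplying its own. Note also that the generated "ExecReload=/bin/kill -s HUP " line has lost its $MAINPID argument, a separate defect in the rewritten unit. A minimal sketch of the same reset pattern done as a standard drop-in (which is not how this minikube version applies it; it rewrites /lib/systemd/system/docker.service in place):

	# Hypothetical drop-in illustrating the ExecStart= reset; the dockerd
	# flags here are illustrative, not the ones minikube generates.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<-'EOF'
		[Service]
		ExecStart=
		ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker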
version_upgrade_test.go:317: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2263857420.exe start -p missing-upgrade-752000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2263857420.exe start -p missing-upgrade-752000 --memory=2200 --driver=docker : exit status 70 (4.092695371s)

                                                
                                                
-- stdout --
	* [missing-upgrade-752000] minikube v1.9.1 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-752000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-752000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
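
When "sudo systemctl start docker" fails inside the node, as it does twice here, the two commands the error text itself recommends can be run through minikube ssh while the container is still up; a sketch using the profile name from this run:

	minikube -p missing-upgrade-752000 ssh -- sudo systemctl status docker.service --no-pager
	minikube -p missing-upgrade-752000 ssh -- sudo journalctl -u docker.service -xe --no-pager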
version_upgrade_test.go:317: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2263857420.exe start -p missing-upgrade-752000 --memory=2200 --driver=docker 
E0331 11:01:19.608932    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:19.614834    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:19.625311    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:19.646011    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:19.686305    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:19.766925    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:19.927038    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:20.247952    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:20.888071    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:22.168797    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.2263857420.exe start -p missing-upgrade-752000 --memory=2200 --driver=docker : exit status 70 (4.042052653s)

                                                
                                                
-- stdout --
	* [missing-upgrade-752000] minikube v1.9.1 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-752000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-752000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-03-31 11:01:22.495276 -0700 PDT m=+2487.428645038
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-752000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-752000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d096fb7a46f6addbdb6618829170bdfee8667e319e74784ab075399c6234ef6",
	        "Created": "2023-03-31T18:00:58.332648361Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 176615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:00:58.571091473Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/5d096fb7a46f6addbdb6618829170bdfee8667e319e74784ab075399c6234ef6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d096fb7a46f6addbdb6618829170bdfee8667e319e74784ab075399c6234ef6/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d096fb7a46f6addbdb6618829170bdfee8667e319e74784ab075399c6234ef6/hosts",
	        "LogPath": "/var/lib/docker/containers/5d096fb7a46f6addbdb6618829170bdfee8667e319e74784ab075399c6234ef6/5d096fb7a46f6addbdb6618829170bdfee8667e319e74784ab075399c6234ef6-json.log",
	        "Name": "/missing-upgrade-752000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-752000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ff1537f4d4d482573ce95d6eca5d6f60c8d7131dddd0eab03cece64402389ade-init/diff:/var/lib/docker/overlay2/0f1f87dae13e3d6c1e26bf86f2861f8b91ce3789be7c4e92d8b155e5200ab693/diff:/var/lib/docker/overlay2/7f32cee17ad9c12a0f0db2baf7ca9024eedb273edaff0f4e86ef85e5700c84f5/diff:/var/lib/docker/overlay2/8c2d73bbfae80b6f94b7e962ae9854a96c2756a499cc5f64f263202d2497e917/diff:/var/lib/docker/overlay2/2d3aaf75c7cd24910d68b9e9de840dcda56c4d3b4724d7654e592f0f82eb633c/diff:/var/lib/docker/overlay2/58ea865d3f308ac251afe813f9b8886eaa5bfd34b8ec664284a86330e19db754/diff:/var/lib/docker/overlay2/d2299dc2840a2c6a1d6ff1f798df947bfb658aec896b24ed29e79ade04227db3/diff:/var/lib/docker/overlay2/fc4889ff6bbbd1cb558394386d643b61517255f9513b07f52f37a66637d960f2/diff:/var/lib/docker/overlay2/ed74bf189227b916ec42460d91816a91c1e6bf3c7667655cb2a88d0351d81549/diff:/var/lib/docker/overlay2/49482d68f5a4021d3fe4fb4f48411a3d52cdeae16c9d92931249c09954e4852c/diff:/var/lib/docker/overlay2/47f4ed
785727191a64e043e582a7d70b65899b9bbde289387ae3c661f286f90e/diff:/var/lib/docker/overlay2/ceb22616d74f3fb95ac5fca3f50b460c4a56f5156797be123a6ce27fd0c2a67f/diff:/var/lib/docker/overlay2/20e9689c79ca1cdc1688e38143f823a86af04057080a936b0d63c587026c6fe2/diff:/var/lib/docker/overlay2/3058c9134382eea8add3bff563eea094973c4def5d41ce15f932c10a126299a0/diff:/var/lib/docker/overlay2/29a0f131003172b131f3c25e8b88220209add31cbeef9e732c8e20871301efc2/diff:/var/lib/docker/overlay2/5f9292f06310de74dd01224f30ea82aa5bf6752eb3311569fe2eb57c5d1356a7/diff:/var/lib/docker/overlay2/51e19a56fc532e9bb18f1703bdcdd1c12eb6189d90643dbc807bc998d3896acc/diff:/var/lib/docker/overlay2/8711e8773b9ba238c5430e60197a3d7e50172f441405ffc46ae2372d688cf013/diff:/var/lib/docker/overlay2/c4cc9d2a44b270bc08b6071c5cf3b01153b21d8c58b43e092ae3d625ca2dca10/diff:/var/lib/docker/overlay2/ef3653e6a76e1e8038736a87465520c48ded8bb193b276a7686a2b738ec30395/diff:/var/lib/docker/overlay2/23aa646ae9e0cd40cebb809c52cfe2200ed57b7c32e264601e3d6341a630ce11/diff:/var/lib/d
ocker/overlay2/8be1bd5be2b47454afcc3c9311b96adf8427f9a33b09cd26cf0f190ee1775668/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff1537f4d4d482573ce95d6eca5d6f60c8d7131dddd0eab03cece64402389ade/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff1537f4d4d482573ce95d6eca5d6f60c8d7131dddd0eab03cece64402389ade/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff1537f4d4d482573ce95d6eca5d6f60c8d7131dddd0eab03cece64402389ade/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-752000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-752000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-752000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-752000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-752000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a6de86c51c7cf9e458c9413c515525c7e7be2f4572acf4ff3d2291ddaa063f8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51682"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51683"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51684"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4a6de86c51c7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "de06d8a1d1e5a1c26f5fcec9907596f6e2d335a75a3b3cb5fcd5c30e728ce459",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "f23c0e940708fc9a8af0a2f49ad9e9e7316326c4546197ad1784141561b58a8f",
	                    "EndpointID": "de06d8a1d1e5a1c26f5fcec9907596f6e2d335a75a3b3cb5fcd5c30e728ce459",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
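
For reference, the NetworkSettings.Ports map in the inspect output above is exactly what the harness reads back to reach the container from the host; a Go-template query in the same shape as the cli_runner invocations later in this log (container name taken from this output) extracts a single binding:

    # sketch: host port bound to the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-752000
    # per the Ports block above, this prints 51682

The same template shape works for 2376/tcp (Docker TLS) and 8443/tcp (the Kubernetes API server).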
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-752000 -n missing-upgrade-752000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-752000 -n missing-upgrade-752000: exit status 6 (388.608024ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0331 11:01:22.934368   13508 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-752000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-752000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-752000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-752000
E0331 11:01:24.730800    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-752000: (2.39305406s)
--- FAIL: TestMissingContainerUpgrade (68.06s)
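
The status warning above names the usual recovery path for a stale kubectl context; a minimal sketch with this run's profile name:

    # sketch: repoint kubeconfig at the profile's current endpoint, then verify
    minikube update-context -p missing-upgrade-752000
    kubectl config current-context

Here, though, status.go:415 reports the profile missing from the kubeconfig entirely, so update-context likely has nothing to rewrite, and the harness proceeds straight to deleting the profile.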

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (56.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1193273.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=docker 
E0331 11:02:41.530614    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1193273.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=docker : exit status 70 (45.16647796s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-369000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1498704329
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:02:32.015618972 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-369000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:02:51.182618789 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-369000", then "minikube start -p stopped-upgrade-369000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 176.13 KiB ... 542.91 MiB [flattened terminal progress ticks elided]
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:02:51.182618789 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1193273.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1193273.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=docker : exit status 70 (4.553785931s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-369000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3659408088
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-369000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1193273.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1193273.exe start -p stopped-upgrade-369000 --memory=2200 --vm-driver=docker : exit status 70 (4.482180705s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-369000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig2906712301
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-369000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (56.32s)
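
All three diffs above trip over the rule quoted in the generated unit's own comments: a non-oneshot systemd service may carry only one effective ExecStart=, so an override must blank the inherited value before setting its own. A minimal drop-in sketch of that pattern (standard drop-in path; the dockerd flags are illustrative, not the exact set the v1.9.0 provisioner writes):

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    # clear the ExecStart inherited from the base unit, then set the replacement
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    # apply, and on failure inspect as the error message itself suggests:
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    systemctl status docker.service; journalctl -xe

Note that the rewritten unit in the diff loses the $MAINPID argument from ExecReload, a hint that the v1.9.0 provisioner's template expansion misfired; whatever the root cause, dockerd cannot start under the rewritten unit, so the initial create and both retries die at the same provisioning step.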

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (262.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0331 11:13:10.701653    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:20.941313    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:27.103639    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 11:13:41.420889    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m21.825121605s)

                                                
                                                
-- stdout --
	* [old-k8s-version-221000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-221000 in cluster old-k8s-version-221000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0331 11:13:06.366813   19978 out.go:296] Setting OutFile to fd 1 ...
	I0331 11:13:06.367006   19978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:13:06.367011   19978 out.go:309] Setting ErrFile to fd 2...
	I0331 11:13:06.367015   19978 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:13:06.367131   19978 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 11:13:06.368771   19978 out.go:303] Setting JSON to false
	I0331 11:13:06.389972   19978 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4354,"bootTime":1680282032,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 11:13:06.390068   19978 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 11:13:06.432495   19978 out.go:177] * [old-k8s-version-221000] minikube v1.29.0 on Darwin 13.3
	I0331 11:13:06.453604   19978 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 11:13:06.453596   19978 notify.go:220] Checking for updates...
	I0331 11:13:06.474439   19978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:13:06.516558   19978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 11:13:06.558442   19978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 11:13:06.600300   19978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 11:13:06.621515   19978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 11:13:06.642875   19978 config.go:182] Loaded profile config "kubenet-346000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:13:06.642924   19978 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 11:13:06.707865   19978 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 11:13:06.707995   19978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:13:06.900584   19978 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-31 18:13:06.760534754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:13:06.943009   19978 out.go:177] * Using the docker driver based on user configuration
	I0331 11:13:06.963997   19978 start.go:295] selected driver: docker
	I0331 11:13:06.964008   19978 start.go:859] validating driver "docker" against <nil>
	I0331 11:13:06.964017   19978 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 11:13:06.967020   19978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:13:07.167703   19978 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-31 18:13:07.02650783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:13:07.167830   19978 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0331 11:13:07.168044   19978 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0331 11:13:07.189455   19978 out.go:177] * Using Docker Desktop driver with root privileges
	I0331 11:13:07.210222   19978 cni.go:84] Creating CNI manager for ""
	I0331 11:13:07.210248   19978 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 11:13:07.210254   19978 start_flags.go:319] config:
	{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:13:07.232271   19978 out.go:177] * Starting control plane node old-k8s-version-221000 in cluster old-k8s-version-221000
	I0331 11:13:07.290245   19978 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 11:13:07.311441   19978 out.go:177] * Pulling base image ...
	I0331 11:13:07.370316   19978 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:13:07.370335   19978 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 11:13:07.370430   19978 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0331 11:13:07.370453   19978 cache.go:57] Caching tarball of preloaded images
	I0331 11:13:07.370707   19978 preload.go:174] Found /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 11:13:07.370725   19978 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0331 11:13:07.371794   19978 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/config.json ...
	I0331 11:13:07.371947   19978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/config.json: {Name:mkb5455ac9a0b555d1ca59ddce285d04990a845f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:07.431063   19978 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 11:13:07.431088   19978 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 11:13:07.431110   19978 cache.go:193] Successfully downloaded all kic artifacts
	I0331 11:13:07.431167   19978 start.go:364] acquiring machines lock for old-k8s-version-221000: {Name:mkd3c9d5738895d94e9fe50102426daf0ea0e9c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 11:13:07.431328   19978 start.go:368] acquired machines lock for "old-k8s-version-221000" in 148.215µs
	I0331 11:13:07.431357   19978 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 11:13:07.431417   19978 start.go:125] createHost starting for "" (driver="docker")
	I0331 11:13:07.453178   19978 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0331 11:13:07.453409   19978 start.go:159] libmachine.API.Create for "old-k8s-version-221000" (driver="docker")
	I0331 11:13:07.453426   19978 client.go:168] LocalClient.Create starting
	I0331 11:13:07.453537   19978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem
	I0331 11:13:07.453580   19978 main.go:141] libmachine: Decoding PEM data...
	I0331 11:13:07.453598   19978 main.go:141] libmachine: Parsing certificate...
	I0331 11:13:07.453662   19978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem
	I0331 11:13:07.453686   19978 main.go:141] libmachine: Decoding PEM data...
	I0331 11:13:07.453702   19978 main.go:141] libmachine: Parsing certificate...
	I0331 11:13:07.454169   19978 cli_runner.go:164] Run: docker network inspect old-k8s-version-221000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0331 11:13:07.512466   19978 cli_runner.go:211] docker network inspect old-k8s-version-221000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0331 11:13:07.512573   19978 network_create.go:281] running [docker network inspect old-k8s-version-221000] to gather additional debugging logs...
	I0331 11:13:07.512596   19978 cli_runner.go:164] Run: docker network inspect old-k8s-version-221000
	W0331 11:13:07.571232   19978 cli_runner.go:211] docker network inspect old-k8s-version-221000 returned with exit code 1
	I0331 11:13:07.571262   19978 network_create.go:284] error running [docker network inspect old-k8s-version-221000]: docker network inspect old-k8s-version-221000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-221000
	I0331 11:13:07.571277   19978 network_create.go:286] output of [docker network inspect old-k8s-version-221000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-221000
	
	** /stderr **
	I0331 11:13:07.571364   19978 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0331 11:13:07.631434   19978 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0331 11:13:07.632850   19978 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0331 11:13:07.634184   19978 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0331 11:13:07.634536   19978 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e19200}
	I0331 11:13:07.634551   19978 network_create.go:123] attempt to create docker network old-k8s-version-221000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0331 11:13:07.634632   19978 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-221000 old-k8s-version-221000
	I0331 11:13:07.727702   19978 network_create.go:107] docker network old-k8s-version-221000 192.168.76.0/24 created
	I0331 11:13:07.727740   19978 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-221000" container
	I0331 11:13:07.727867   19978 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0331 11:13:07.787942   19978 cli_runner.go:164] Run: docker volume create old-k8s-version-221000 --label name.minikube.sigs.k8s.io=old-k8s-version-221000 --label created_by.minikube.sigs.k8s.io=true
	I0331 11:13:07.847367   19978 oci.go:103] Successfully created a docker volume old-k8s-version-221000
	I0331 11:13:07.847501   19978 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-221000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-221000 --entrypoint /usr/bin/test -v old-k8s-version-221000:/var gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -d /var/lib
	I0331 11:13:08.414346   19978 oci.go:107] Successfully prepared a docker volume old-k8s-version-221000
	I0331 11:13:08.414379   19978 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:13:08.414393   19978 kic.go:190] Starting extracting preloaded images to volume ...
	I0331 11:13:08.414502   19978 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-221000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -I lz4 -xf /preloaded.tar -C /extractDir
	I0331 11:13:14.536430   19978 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-221000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 -I lz4 -xf /preloaded.tar -C /extractDir: (6.122151883s)
	I0331 11:13:14.536462   19978 kic.go:199] duration metric: took 6.122373 seconds to extract preloaded images to volume
	I0331 11:13:14.536585   19978 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0331 11:13:14.729097   19978 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-221000 --name old-k8s-version-221000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-221000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-221000 --network old-k8s-version-221000 --ip 192.168.76.2 --volume old-k8s-version-221000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55
	I0331 11:13:15.112821   19978 cli_runner.go:164] Run: docker container inspect old-k8s-version-221000 --format={{.State.Running}}
	I0331 11:13:15.175801   19978 cli_runner.go:164] Run: docker container inspect old-k8s-version-221000 --format={{.State.Status}}
	I0331 11:13:15.245219   19978 cli_runner.go:164] Run: docker exec old-k8s-version-221000 stat /var/lib/dpkg/alternatives/iptables
	I0331 11:13:15.373092   19978 oci.go:144] the created container "old-k8s-version-221000" has a running status.
	I0331 11:13:15.373129   19978 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa...
	I0331 11:13:15.498250   19978 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0331 11:13:15.617525   19978 cli_runner.go:164] Run: docker container inspect old-k8s-version-221000 --format={{.State.Status}}
	I0331 11:13:15.682168   19978 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0331 11:13:15.682189   19978 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-221000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0331 11:13:15.800490   19978 cli_runner.go:164] Run: docker container inspect old-k8s-version-221000 --format={{.State.Status}}
	I0331 11:13:15.867192   19978 machine.go:88] provisioning docker machine ...
	I0331 11:13:15.867235   19978 ubuntu.go:169] provisioning hostname "old-k8s-version-221000"
	I0331 11:13:15.867336   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:15.931308   19978 main.go:141] libmachine: Using SSH client type: native
	I0331 11:13:15.931720   19978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53352 <nil> <nil>}
	I0331 11:13:15.931735   19978 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-221000 && echo "old-k8s-version-221000" | sudo tee /etc/hostname
	I0331 11:13:16.076684   19978 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-221000
	
	I0331 11:13:16.076785   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:16.137221   19978 main.go:141] libmachine: Using SSH client type: native
	I0331 11:13:16.137564   19978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53352 <nil> <nil>}
	I0331 11:13:16.137582   19978 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-221000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-221000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-221000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 11:13:16.269208   19978 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 11:13:16.269233   19978 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 11:13:16.269263   19978 ubuntu.go:177] setting up certificates
	I0331 11:13:16.269274   19978 provision.go:83] configureAuth start
	I0331 11:13:16.269365   19978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-221000
	I0331 11:13:16.332988   19978 provision.go:138] copyHostCerts
	I0331 11:13:16.333090   19978 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 11:13:16.333098   19978 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 11:13:16.333203   19978 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 11:13:16.333402   19978 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 11:13:16.333408   19978 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 11:13:16.333468   19978 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 11:13:16.333650   19978 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 11:13:16.333656   19978 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 11:13:16.333714   19978 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 11:13:16.333840   19978 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-221000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-221000]
	I0331 11:13:16.463686   19978 provision.go:172] copyRemoteCerts
	I0331 11:13:16.463748   19978 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 11:13:16.463796   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:16.525258   19978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53352 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:13:16.619636   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 11:13:16.637372   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0331 11:13:16.655080   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0331 11:13:16.672941   19978 provision.go:86] duration metric: configureAuth took 403.671759ms
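	The copyHostCerts and "generating server cert" steps above boil down to signing a fresh server key with the shared CA and stamping the SAN list from the log (192.168.76.2, 127.0.0.1, localhost, minikube, old-k8s-version-221000). A hedged sketch with Go's crypto/x509; this is not minikube's code, and the throwaway in-memory CA stands in for the ca.pem/ca-key.pem pair on disk:

```go
// Sketch: sign a server certificate with IP/DNS SANs using a CA key pair.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-221000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the "san=[...]" log line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-221000"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// Throwaway CA standing in for the on-disk ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now().Add(-time.Hour), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, err := signServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("server cert DER bytes:", len(der))
}
```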
	I0331 11:13:16.672957   19978 ubuntu.go:193] setting minikube options for container-runtime
	I0331 11:13:16.673117   19978 config.go:182] Loaded profile config "old-k8s-version-221000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0331 11:13:16.673178   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:16.733974   19978 main.go:141] libmachine: Using SSH client type: native
	I0331 11:13:16.734320   19978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53352 <nil> <nil>}
	I0331 11:13:16.734333   19978 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 11:13:16.868597   19978 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 11:13:16.868612   19978 ubuntu.go:71] root file system type: overlay
	I0331 11:13:16.868703   19978 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 11:13:16.868809   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:16.929717   19978 main.go:141] libmachine: Using SSH client type: native
	I0331 11:13:16.930072   19978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53352 <nil> <nil>}
	I0331 11:13:16.930119   19978 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 11:13:17.074987   19978 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 11:13:17.075083   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:17.135312   19978 main.go:141] libmachine: Using SSH client type: native
	I0331 11:13:17.135649   19978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53352 <nil> <nil>}
	I0331 11:13:17.135661   19978 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 11:13:17.797400   19978 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-03-27 16:16:18.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-31 18:13:17.073278828 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
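	The command above is the idempotent half of the unit update: write docker.service.new, diff it against the installed unit, and only when they differ move the new file into place and daemon-reload/enable/restart. The same compare-then-apply shape in Go (paths and the restart target are illustrative; the systemctl calls assume a systemd host with sufficient privileges):

```go
// Sketch of the diff-or-move pattern behind the SSH command above.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func installIfChanged(path string, want []byte) (changed bool, err error) {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // unit unchanged: skip the disruptive restart
	}
	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
		return false, err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd ...\n")
	changed, err := installIfChanged("/lib/systemd/system/docker.service", unit)
	if err != nil {
		panic(err)
	}
	if changed { // mirror of: daemon-reload && enable docker && restart docker
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if err := exec.Command("systemctl", args...).Run(); err != nil {
				panic(err)
			}
		}
	}
}
```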
	
	I0331 11:13:17.797429   19978 machine.go:91] provisioned docker machine in 1.93031044s
	I0331 11:13:17.797441   19978 client.go:171] LocalClient.Create took 10.344524886s
	I0331 11:13:17.797461   19978 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-221000" took 10.34456762s
	I0331 11:13:17.797523   19978 start.go:300] post-start starting for "old-k8s-version-221000" (driver="docker")
	I0331 11:13:17.797537   19978 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 11:13:17.797657   19978 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 11:13:17.797747   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:17.862681   19978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53352 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:13:17.958975   19978 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 11:13:17.962802   19978 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 11:13:17.962817   19978 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 11:13:17.962825   19978 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 11:13:17.962831   19978 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 11:13:17.962843   19978 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 11:13:17.962929   19978 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 11:13:17.963109   19978 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 11:13:17.963317   19978 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 11:13:17.971195   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:13:17.989376   19978 start.go:303] post-start completed in 191.847185ms
	I0331 11:13:17.997975   19978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-221000
	I0331 11:13:18.060143   19978 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/config.json ...
	I0331 11:13:18.060686   19978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 11:13:18.060745   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:18.121624   19978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53352 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:13:18.212878   19978 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 11:13:18.217690   19978 start.go:128] duration metric: createHost completed in 10.786793684s
	I0331 11:13:18.217712   19978 start.go:83] releasing machines lock for "old-k8s-version-221000", held for 10.786916615s
	I0331 11:13:18.217813   19978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-221000
	I0331 11:13:18.277678   19978 ssh_runner.go:195] Run: cat /version.json
	I0331 11:13:18.277699   19978 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 11:13:18.277749   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:18.277781   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:18.341358   19978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53352 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:13:18.341824   19978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53352 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	W0331 11:13:18.488961   19978 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 11:13:18.489043   19978 ssh_runner.go:195] Run: systemctl --version
	I0331 11:13:18.494470   19978 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0331 11:13:18.499762   19978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0331 11:13:18.520765   19978 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0331 11:13:18.520841   19978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0331 11:13:18.534941   19978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0331 11:13:18.543008   19978 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
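	The find/sed pipeline above normalizes whatever CNI configs ship in the base image: it forces "cniVersion": "1.0.0", injects a "name" into the loopback config, and rewrites bridge/podman subnets to the pod CIDR 10.244.0.0/16. A sketch of the same edit done structurally instead of with sed, using Go's encoding/json (the sample conf below is illustrative):

```go
// Rewrite a CNI conf: pin cniVersion and force the bridge subnet to the pod CIDR.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"cniVersion":"0.3.1","name":"bridge","type":"bridge",
	  "ipam":{"type":"host-local","subnet":"192.168.0.0/24"}}`)
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	conf["cniVersion"] = "1.0.0"
	if ipam, ok := conf["ipam"].(map[string]any); ok {
		ipam["subnet"] = "10.244.0.0/16" // pod CIDR used throughout this log
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}
```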
	I0331 11:13:18.543025   19978 start.go:481] detecting cgroup driver to use...
	I0331 11:13:18.543041   19978 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:13:18.543109   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:13:18.556581   19978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0331 11:13:18.565531   19978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 11:13:18.574598   19978 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 11:13:18.574660   19978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 11:13:18.583762   19978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:13:18.593242   19978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 11:13:18.603105   19978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:13:18.612723   19978 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 11:13:18.621917   19978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 11:13:18.632133   19978 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 11:13:18.640431   19978 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 11:13:18.648267   19978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:13:18.720459   19978 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 11:13:18.795434   19978 start.go:481] detecting cgroup driver to use...
	I0331 11:13:18.795456   19978 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:13:18.795524   19978 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 11:13:18.806055   19978 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 11:13:18.806127   19978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 11:13:18.818521   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:13:18.833004   19978 ssh_runner.go:195] Run: which cri-dockerd
	I0331 11:13:18.837503   19978 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 11:13:18.846089   19978 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 11:13:18.860963   19978 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 11:13:18.937504   19978 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 11:13:19.027337   19978 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 11:13:19.027352   19978 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 11:13:19.041404   19978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:13:19.135152   19978 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:13:19.379720   19978 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:13:19.406874   19978 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:13:19.454909   19978 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.2 ...
	I0331 11:13:19.455066   19978 cli_runner.go:164] Run: docker exec -t old-k8s-version-221000 dig +short host.docker.internal
	I0331 11:13:19.576560   19978 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 11:13:19.576678   19978 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0331 11:13:19.580977   19978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
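	The bash one-liner above upserts the host.minikube.internal mapping: strip any existing line ending in the tab-separated name, append the fresh "ip<TAB>name" pair, and copy the temp file over /etc/hosts. The same transform as a small Go function (file I/O and sudo omitted):

```go
// Idempotent hosts-entry upsert matching the grep -v / echo pipeline above.
package main

import (
	"fmt"
	"strings"
)

func upsertHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
			keep = append(keep, line)
		}
	}
	return strings.Join(keep, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost", "192.168.65.2", "host.minikube.internal"))
}
```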
	I0331 11:13:19.592040   19978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:13:19.654029   19978 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:13:19.654111   19978 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:13:19.675697   19978 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:13:19.675711   19978 docker.go:645] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0331 11:13:19.675777   19978 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 11:13:19.683870   19978 ssh_runner.go:195] Run: which lz4
	I0331 11:13:19.688141   19978 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0331 11:13:19.692172   19978 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0331 11:13:19.692201   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0331 11:13:25.195563   19978 docker.go:603] Took 5.507782 seconds to copy over tarball
	I0331 11:13:25.195633   19978 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0331 11:13:27.476808   19978 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.281256092s)
	I0331 11:13:27.476825   19978 ssh_runner.go:146] rm: /preloaded.tar.lz4
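	The preload path scp's a ~370 MB images tarball and unpacks it with tar -I lz4 -C /var -xf /preloaded.tar.lz4, repopulating /var/lib/docker in one shot instead of pulling each image. A sketch of reading such a .tar.lz4 stream in Go, assuming github.com/pierrec/lz4/v4 as the decompressor (any lz4 reader works; extraction to disk is elided):

```go
// Stream a .tar.lz4 and walk its entries, the same job as the tar command above.
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	"github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("/preloaded.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Println(hdr.Name) // writing entries to disk omitted for brevity
	}
}
```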
	I0331 11:13:27.546246   19978 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 11:13:27.554645   19978 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0331 11:13:27.567735   19978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:13:27.636915   19978 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:13:28.160150   19978 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:13:28.180747   19978 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:13:28.180761   19978 docker.go:645] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0331 11:13:28.180770   19978 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0331 11:13:28.190132   19978 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:13:28.192667   19978 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0331 11:13:28.192816   19978 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0331 11:13:28.193282   19978 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:13:28.194760   19978 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:13:28.195274   19978 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:13:28.195778   19978 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:13:28.199139   19978 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:13:28.204417   19978 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:13:28.207759   19978 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error: No such image: registry.k8s.io/coredns:1.6.2
	I0331 11:13:28.208088   19978 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error: No such image: registry.k8s.io/pause:3.1
	I0331 11:13:28.209314   19978 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:13:28.209422   19978 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:13:28.209963   19978 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error: No such image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:13:28.210200   19978 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:13:28.211571   19978 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:13:29.360244   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:13:29.381794   19978 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0331 11:13:29.381830   19978 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:13:29.381897   19978 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:13:29.402983   19978 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0331 11:13:29.498061   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0331 11:13:29.519511   19978 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0331 11:13:29.519533   19978 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.2
	I0331 11:13:29.519582   19978 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0331 11:13:29.541807   19978 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0331 11:13:29.634373   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0331 11:13:29.656917   19978 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0331 11:13:29.656944   19978 docker.go:313] Removing image: registry.k8s.io/pause:3.1
	I0331 11:13:29.657002   19978 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0331 11:13:29.678742   19978 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0331 11:13:30.046458   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:13:30.066665   19978 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0331 11:13:30.066699   19978 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:13:30.066757   19978 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:13:30.090444   19978 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0331 11:13:30.191046   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:13:30.355795   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0331 11:13:30.376480   19978 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0331 11:13:30.376509   19978 docker.go:313] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:13:30.376563   19978 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0331 11:13:30.397203   19978 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0331 11:13:30.653506   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:13:30.675335   19978 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0331 11:13:30.675360   19978 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:13:30.675421   19978 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:13:30.698756   19978 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0331 11:13:30.962395   19978 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:13:30.983368   19978 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0331 11:13:30.983396   19978 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:13:30.983458   19978 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:13:31.015988   19978 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0331 11:13:31.016049   19978 cache_images.go:92] LoadImages completed in 2.835406854s
	W0331 11:13:31.016154   19978 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
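	Each iteration of the LoadImages loop above is the same probe: run docker image inspect --format {{.Id}}, compare against the expected hash, and mark the image "needs transfer" on a miss or mismatch (here every probe mismatched because the preload carries k8s.gcr.io tags while the cache expects registry.k8s.io names). The check in sketch form (image name and hash below are illustrative):

```go
// Does the container runtime already hold this image at the expected ID?
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.1", "sha256:da86e6ba6ca1...")) // truncated hash, illustrative
}
```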
	I0331 11:13:31.016243   19978 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 11:13:31.047295   19978 cni.go:84] Creating CNI manager for ""
	I0331 11:13:31.047318   19978 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 11:13:31.047337   19978 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 11:13:31.047352   19978 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-221000 NodeName:old-k8s-version-221000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 11:13:31.047459   19978 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-221000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-221000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0331 11:13:31.047530   19978 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-221000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0331 11:13:31.047593   19978 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0331 11:13:31.055751   19978 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 11:13:31.055808   19978 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 11:13:31.063318   19978 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0331 11:13:31.076631   19978 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 11:13:31.092394   19978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0331 11:13:31.109787   19978 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0331 11:13:31.116134   19978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:13:31.129094   19978 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000 for IP: 192.168.76.2
	I0331 11:13:31.129124   19978 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:31.129350   19978 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 11:13:31.129458   19978 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 11:13:31.129524   19978 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/client.key
	I0331 11:13:31.129554   19978 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/client.crt with IP's: []
	I0331 11:13:31.301926   19978 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/client.crt ...
	I0331 11:13:31.301948   19978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/client.crt: {Name:mk79a65773f2eb0c1d433a6848754a5ed4c59021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:31.330796   19978 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/client.key ...
	I0331 11:13:31.330818   19978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/client.key: {Name:mke63a0164fa0447771c190d589dbe30f6242f74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:31.352030   19978 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key.31bdca25
	I0331 11:13:31.352103   19978 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0331 11:13:31.412205   19978 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.crt.31bdca25 ...
	I0331 11:13:31.412224   19978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.crt.31bdca25: {Name:mk38b201eccce6b3d14a3b6c0a5db868e304c7d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:31.412527   19978 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key.31bdca25 ...
	I0331 11:13:31.412536   19978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key.31bdca25: {Name:mk9a3f172f130f13f764a93ea61ccf548d0ab6b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:31.412748   19978 certs.go:333] copying /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.crt
	I0331 11:13:31.412924   19978 certs.go:337] copying /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key
	I0331 11:13:31.413094   19978 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.key
	I0331 11:13:31.413111   19978 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.crt with IP's: []
	I0331 11:13:31.575834   19978 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.crt ...
	I0331 11:13:31.575849   19978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.crt: {Name:mka6d12b78bc0da19b38546fca36e937d1506199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:31.576160   19978 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.key ...
	I0331 11:13:31.576168   19978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.key: {Name:mkbf1a71690781dee25fa1b26ff4804210864793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:13:31.576562   19978 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 11:13:31.576607   19978 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 11:13:31.576643   19978 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 11:13:31.576685   19978 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 11:13:31.576717   19978 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 11:13:31.576754   19978 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 11:13:31.576835   19978 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:13:31.577307   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 11:13:31.601956   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 11:13:31.624680   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 11:13:31.647272   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 11:13:31.664947   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 11:13:31.683044   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 11:13:31.708272   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 11:13:31.728908   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 11:13:31.749870   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 11:13:31.767822   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 11:13:31.786381   19978 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 11:13:31.817569   19978 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 11:13:31.833503   19978 ssh_runner.go:195] Run: openssl version
	I0331 11:13:31.839498   19978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 11:13:31.847710   19978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:13:31.851811   19978 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:13:31.851854   19978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:13:31.857398   19978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 11:13:31.865782   19978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 11:13:31.874089   19978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 11:13:31.878301   19978 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 11:13:31.878345   19978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 11:13:31.884009   19978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
	I0331 11:13:31.893738   19978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 11:13:31.904832   19978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 11:13:31.910244   19978 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 11:13:31.910321   19978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 11:13:31.916767   19978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
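	The openssl/ln sequence above implements the c_rehash convention: OpenSSL resolves trust anchors in /etc/ssl/certs via symlinks named <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem), so each installed PEM gets hashed and linked under that name. A sketch of one iteration, shelling out to the same openssl x509 -hash -noout (paths taken from the log; root privileges assumed):

```go
// Hash a PEM's subject and symlink it under /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // mimic ln -fs: replace any existing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}
```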
	I0331 11:13:31.928495   19978 kubeadm.go:401] StartCluster: {Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:13:31.928614   19978 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:13:31.951156   19978 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 11:13:31.959289   19978 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:13:31.966801   19978 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:13:31.966857   19978 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:13:31.974355   19978 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
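	The probe above branches on the exit status: ls returning 2 means none of the old kubeconfigs exist, so stale-config cleanup is skipped and kubeadm init can proceed directly. The exit-code branching in Go, for reference (paths as in the log):

```go
// Probe for leftover kubeconfigs and branch on the ls exit status.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("ls", "-la",
		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 2 {
		fmt.Println("no stale config, skipping cleanup")
		return
	}
	fmt.Println("existing config found (or other error):", err)
}
```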
	I0331 11:13:31.974383   19978 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:13:32.045003   19978 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:13:32.045043   19978 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:13:32.241796   19978 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:13:32.241886   19978 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:13:32.241952   19978 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:13:32.412546   19978 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:13:32.413256   19978 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:13:32.420741   19978 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:13:32.489688   19978 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:13:32.610916   19978 out.go:204]   - Generating certificates and keys ...
	I0331 11:13:32.611005   19978 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:13:32.611070   19978 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:13:32.763241   19978 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0331 11:13:32.877876   19978 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0331 11:13:33.074958   19978 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0331 11:13:33.139253   19978 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0331 11:13:33.289102   19978 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0331 11:13:33.289206   19978 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-221000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0331 11:13:33.385744   19978 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0331 11:13:33.385848   19978 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-221000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0331 11:13:33.483327   19978 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0331 11:13:33.609083   19978 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0331 11:13:33.689936   19978 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0331 11:13:33.690145   19978 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:13:33.806239   19978 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:13:33.872452   19978 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:13:33.922647   19978 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:13:34.101524   19978 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:13:34.102032   19978 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:13:34.123666   19978 out.go:204]   - Booting up control plane ...
	I0331 11:13:34.123765   19978 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:13:34.123831   19978 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:13:34.123893   19978 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:13:34.123971   19978 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:13:34.124131   19978 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:14:14.108462   19978 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:14:14.109268   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:14:14.109483   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:14:19.109928   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:14:19.110104   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:14:29.110070   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:14:29.110223   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:14:49.109671   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:14:49.109894   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:15:29.108400   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:15:29.108604   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:15:29.108612   19978 kubeadm.go:322] 
	I0331 11:15:29.108648   19978 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:15:29.108687   19978 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:15:29.108699   19978 kubeadm.go:322] 
	I0331 11:15:29.108727   19978 kubeadm.go:322] This error is likely caused by:
	I0331 11:15:29.108762   19978 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:15:29.108871   19978 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:15:29.108903   19978 kubeadm.go:322] 
	I0331 11:15:29.109022   19978 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:15:29.109075   19978 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:15:29.109102   19978 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:15:29.109106   19978 kubeadm.go:322] 
	I0331 11:15:29.109237   19978 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:15:29.109320   19978 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0331 11:15:29.109393   19978 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0331 11:15:29.109433   19978 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:15:29.109496   19978 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:15:29.109530   19978 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:15:29.112728   19978 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:15:29.112840   19978 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:15:29.112974   19978 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:15:29.113104   19978 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:15:29.113193   19978 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:15:29.113251   19978 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
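	Every kubelet-check probe in this attempt fails identically: the kubelet healthz endpoint on 127.0.0.1:10248 refuses connections, meaning the kubelet never started, so kubeadm times out waiting for the static control-plane pods. The advice printed above can be followed directly on the node; a hedged sketch, assuming minikube ssh reaches the container for this profile:

	out/minikube-darwin-amd64 ssh -p old-k8s-version-221000 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-221000 "sudo journalctl -xeu kubelet | tail -n 50"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-221000 "curl -sSL http://localhost:10248/healthz"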
	W0331 11:15:29.113464   19978 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-221000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-221000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0331 11:15:29.113513   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0331 11:15:29.528497   19978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:15:29.538728   19978 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:15:29.538784   19978 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:15:29.546737   19978 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
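Between attempts minikube tears the half-initialized node back down with kubeadm reset (the Run line at 11:15:29.113513 above) and repeats the stale-config probe, which again finds nothing to clean. The equivalent manual teardown, using the same pinned binary path the log shows:

	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force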
	I0331 11:15:29.546786   19978 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:15:29.596006   19978 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:15:29.596056   19978 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:15:29.766988   19978 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:15:29.767078   19978 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:15:29.767201   19978 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:15:29.920754   19978 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:15:29.921399   19978 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:15:29.929353   19978 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:15:30.011024   19978 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:15:30.068267   19978 out.go:204]   - Generating certificates and keys ...
	I0331 11:15:30.068363   19978 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:15:30.068472   19978 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:15:30.068582   19978 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 11:15:30.068671   19978 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 11:15:30.068748   19978 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 11:15:30.068793   19978 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 11:15:30.068838   19978 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 11:15:30.068887   19978 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 11:15:30.068957   19978 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 11:15:30.069009   19978 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 11:15:30.069037   19978 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 11:15:30.069084   19978 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:15:30.163457   19978 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:15:30.285277   19978 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:15:30.405130   19978 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:15:30.558077   19978 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:15:30.559207   19978 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:15:30.580868   19978 out.go:204]   - Booting up control plane ...
	I0331 11:15:30.581055   19978 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:15:30.581229   19978 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:15:30.581359   19978 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:15:30.581496   19978 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:15:30.581812   19978 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:16:10.569607   19978 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:16:10.570289   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:16:10.570550   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:16:15.579280   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:16:15.579459   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:16:25.591018   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:16:25.591230   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:16:45.598653   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:16:45.598812   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:17:25.600416   19978 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:17:25.600566   19978 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:17:25.600580   19978 kubeadm.go:322] 
	I0331 11:17:25.600622   19978 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:17:25.600656   19978 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:17:25.600660   19978 kubeadm.go:322] 
	I0331 11:17:25.600688   19978 kubeadm.go:322] This error is likely caused by:
	I0331 11:17:25.600721   19978 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:17:25.600813   19978 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:17:25.600819   19978 kubeadm.go:322] 
	I0331 11:17:25.600908   19978 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:17:25.600940   19978 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:17:25.600973   19978 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:17:25.600981   19978 kubeadm.go:322] 
	I0331 11:17:25.601075   19978 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:17:25.601154   19978 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0331 11:17:25.601227   19978 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0331 11:17:25.601270   19978 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:17:25.601339   19978 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:17:25.601370   19978 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:17:25.604396   19978 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:17:25.604466   19978 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:17:25.604566   19978 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:17:25.604657   19978 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:17:25.604743   19978 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:17:25.604803   19978 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0331 11:17:25.604828   19978 kubeadm.go:403] StartCluster complete in 3m53.654846206s
	I0331 11:17:25.604921   19978 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:17:25.624430   19978 logs.go:277] 0 containers: []
	W0331 11:17:25.624444   19978 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:17:25.624513   19978 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:17:25.644797   19978 logs.go:277] 0 containers: []
	W0331 11:17:25.644810   19978 logs.go:279] No container was found matching "etcd"
	I0331 11:17:25.644883   19978 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:17:25.663538   19978 logs.go:277] 0 containers: []
	W0331 11:17:25.663552   19978 logs.go:279] No container was found matching "coredns"
	I0331 11:17:25.663619   19978 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:17:25.682601   19978 logs.go:277] 0 containers: []
	W0331 11:17:25.682614   19978 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:17:25.682689   19978 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:17:25.702364   19978 logs.go:277] 0 containers: []
	W0331 11:17:25.702376   19978 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:17:25.702447   19978 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:17:25.721447   19978 logs.go:277] 0 containers: []
	W0331 11:17:25.721460   19978 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:17:25.721528   19978 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:17:25.742384   19978 logs.go:277] 0 containers: []
	W0331 11:17:25.742398   19978 logs.go:279] No container was found matching "kindnet"
	I0331 11:17:25.742405   19978 logs.go:123] Gathering logs for kubelet ...
	I0331 11:17:25.742416   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:17:25.780658   19978 logs.go:123] Gathering logs for dmesg ...
	I0331 11:17:25.780690   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:17:25.793929   19978 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:17:25.793943   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:17:25.850793   19978 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:17:25.850805   19978 logs.go:123] Gathering logs for Docker ...
	I0331 11:17:25.850812   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:17:25.882443   19978 logs.go:123] Gathering logs for container status ...
	I0331 11:17:25.882461   19978 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:17:27.931585   19978 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04918268s)
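	With both init attempts exhausted, minikube gathers diagnostics: the kubelet and Docker journals, dmesg, kubectl describe nodes (which fails, since nothing ever listened on localhost:8443), and container status. Every docker ps -a --filter=name=k8s_... probe above returned zero containers, so the control-plane pods were never created at all. The same evidence can be collected manually on the node with the commands the log itself uses:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a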
	W0331 11:17:27.931720   19978 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0331 11:17:27.931739   19978 out.go:239] * 
	W0331 11:17:27.931848   19978 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 11:17:27.931861   19978 out.go:239] * 
	W0331 11:17:27.932481   19978 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
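For a bug report on this specific profile, the log capture suggested in the box can be pointed at the failing cluster; a sketch, assuming the global -p/--profile flag applies to logs as usual:

	out/minikube-darwin-amd64 logs -p old-k8s-version-221000 --file=logs.txt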
	I0331 11:17:28.020341   19978 out.go:177] 
	W0331 11:17:28.083390   19978 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 11:17:28.083517   19978 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0331 11:17:28.083624   19978 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0331 11:17:28.125074   19978 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
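Note: the kubeadm stderr above reports Docker running with the "cgroupfs" cgroup driver, which is what minikube's own suggestion in this log targets. A hypothetical local retry along those lines (profile name and kubernetes version are taken from this test's invocation, and the --extra-config flag is exactly the suggestion quoted in the log, not something verified against this run):

    # recreate the profile with the kubelet cgroup driver pinned to systemd
    out/minikube-darwin-amd64 delete -p old-k8s-version-221000
    out/minikube-darwin-amd64 start -p old-k8s-version-221000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
    # inspect the kubelet inside the node, as the kubeadm output recommends
    out/minikube-darwin-amd64 ssh -p old-k8s-version-221000 "sudo journalctl -xeu kubelet"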
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-221000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-221000:

-- stdout --
	[
	    {
	        "Id": "0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c",
	        "Created": "2023-03-31T18:13:14.794492262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:13:15.104874322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hosts",
	        "LogPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c-json.log",
	        "Name": "/old-k8s-version-221000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-221000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-221000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-221000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-221000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-221000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1435af2e140a84e1562075c16fcbb65a3e0ccdee2aaf0c14ae6d1b2df689a153",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53351"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1435af2e140a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-221000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0bb0a05e1404",
	                        "old-k8s-version-221000"
	                    ],
	                    "NetworkID": "1369008204ce2a861d531490c08c0f4f11e7797b90e56bf4d65905b433bee06b",
	                    "EndpointID": "f2d42d9a2ab7f90bacf6faa8fc853efa0d48bf0f1b13814af4a6d84f9440be5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 6 (409.573151ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 11:17:28.672531   21244 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-221000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (262.34s)
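Note: the exit status 6 from the status check above comes from the stale kubeconfig, and the warning prints its own fix. A minimal sketch of applying it, assuming minikube's standard -p profile selector:

    # repoint the kubectl context at this profile, as the warning suggests
    out/minikube-darwin-amd64 update-context -p old-k8s-version-221000
    kubectl config current-context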

TestStartStop/group/old-k8s-version/serial/DeployApp (0.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-221000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-221000 create -f testdata/busybox.yaml: exit status 1 (37.483396ms)

** stderr ** 
	error: context "old-k8s-version-221000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-221000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-221000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-221000:

-- stdout --
	[
	    {
	        "Id": "0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c",
	        "Created": "2023-03-31T18:13:14.794492262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:13:15.104874322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hosts",
	        "LogPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c-json.log",
	        "Name": "/old-k8s-version-221000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-221000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-221000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-221000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-221000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-221000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1435af2e140a84e1562075c16fcbb65a3e0ccdee2aaf0c14ae6d1b2df689a153",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53351"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1435af2e140a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-221000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0bb0a05e1404",
	                        "old-k8s-version-221000"
	                    ],
	                    "NetworkID": "1369008204ce2a861d531490c08c0f4f11e7797b90e56bf4d65905b433bee06b",
	                    "EndpointID": "f2d42d9a2ab7f90bacf6faa8fc853efa0d48bf0f1b13814af4a6d84f9440be5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 6 (405.30171ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 11:17:29.178798   21257 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-221000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-221000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-221000:

-- stdout --
	[
	    {
	        "Id": "0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c",
	        "Created": "2023-03-31T18:13:14.794492262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:13:15.104874322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hosts",
	        "LogPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c-json.log",
	        "Name": "/old-k8s-version-221000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-221000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-221000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-221000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-221000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-221000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1435af2e140a84e1562075c16fcbb65a3e0ccdee2aaf0c14ae6d1b2df689a153",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53351"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1435af2e140a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-221000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0bb0a05e1404",
	                        "old-k8s-version-221000"
	                    ],
	                    "NetworkID": "1369008204ce2a861d531490c08c0f4f11e7797b90e56bf4d65905b433bee06b",
	                    "EndpointID": "f2d42d9a2ab7f90bacf6faa8fc853efa0d48bf0f1b13814af4a6d84f9440be5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 6 (404.081683ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 11:17:29.646281   21269 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-221000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.97s)
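Triage sketch (editorial note, not part of the recorded run): this failure is secondary — the profile's context had already been dropped from the kubeconfig, so kubectl and `minikube status` lost track of an otherwise-running container. Assuming the container is still up, the warning's own suggestion is the usual fix; the profile name below is taken from this report, the rest is a stock minikube/kubectl invocation:

	out/minikube-darwin-amd64 update-context -p old-k8s-version-221000
	kubectl config get-contexts
	kubectl --context old-k8s-version-221000 get nodes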

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-221000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0331 11:17:29.796247    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:29.801476    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:29.811572    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:29.831885    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:29.873953    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:29.954866    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:29.979778    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:30.115062    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:30.213365    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:17:30.435833    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:31.078007    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:32.358969    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:34.919655    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:40.039979    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:42.642857    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:17:50.281803    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:17:50.459145    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:18:00.479467    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:18:10.169110    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 11:18:10.761396    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:18:19.807914    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:18:27.122523    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 11:18:28.165226    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:18:31.419386    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:18:44.928403    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:44.934049    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:44.944399    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:44.966484    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:45.006756    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:45.087080    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:45.248341    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:45.568468    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:46.208722    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:47.489015    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:50.049996    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:18:51.719679    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:18:52.131629    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:18:55.170948    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:19:05.412683    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:19:08.734102    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-221000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m43.986220796s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-221000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-221000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-221000 describe deploy/metrics-server -n kube-system: exit status 1 (36.217706ms)

** stderr ** 
	error: context "old-k8s-version-221000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-221000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-221000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-221000:

-- stdout --
	[
	    {
	        "Id": "0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c",
	        "Created": "2023-03-31T18:13:14.794492262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 278398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:13:15.104874322Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hosts",
	        "LogPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c-json.log",
	        "Name": "/old-k8s-version-221000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-221000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-221000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-221000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-221000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-221000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1435af2e140a84e1562075c16fcbb65a3e0ccdee2aaf0c14ae6d1b2df689a153",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53354"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53351"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1435af2e140a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-221000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0bb0a05e1404",
	                        "old-k8s-version-221000"
	                    ],
	                    "NetworkID": "1369008204ce2a861d531490c08c0f4f11e7797b90e56bf4d65905b433bee06b",
	                    "EndpointID": "f2d42d9a2ab7f90bacf6faa8fc853efa0d48bf0f1b13814af4a6d84f9440be5a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 6 (424.309271ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0331 11:19:14.151272   21390 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-221000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-221000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (104.51s)
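Triage sketch (editorial note, assumptions flagged): the MK_ADDON_ENABLE error above is a symptom — every addon callback was refused on 127.0.0.1:8443, meaning the apiserver inside the node container was not serving when the manifests were applied. With the docker container runtime used by this profile, one plausible check is to look for the kube-apiserver container from inside the node; the profile name comes from this report, and the presence of curl in the kicbase image is an assumption:

	out/minikube-darwin-amd64 ssh -p old-k8s-version-221000 -- docker ps --filter name=kube-apiserver
	out/minikube-darwin-amd64 ssh -p old-k8s-version-221000 -- curl -sk https://localhost:8443/healthz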

TestStartStop/group/old-k8s-version/serial/SecondStart (508.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0331 11:19:25.893741    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:19:30.531537    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 11:19:36.421908    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
E0331 11:19:41.877539    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:19:53.335616    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:20:06.852042    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:20:09.557423    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:20:13.636925    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:20:35.949396    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m24.048189067s)

-- stdout --
	* [old-k8s-version-221000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-221000 in cluster old-k8s-version-221000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-221000" ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0331 11:19:16.188167   21423 out.go:296] Setting OutFile to fd 1 ...
	I0331 11:19:16.188323   21423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:19:16.188328   21423 out.go:309] Setting ErrFile to fd 2...
	I0331 11:19:16.188332   21423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:19:16.188443   21423 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 11:19:16.189782   21423 out.go:303] Setting JSON to false
	I0331 11:19:16.209881   21423 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4724,"bootTime":1680282032,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 11:19:16.209965   21423 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 11:19:16.231894   21423 out.go:177] * [old-k8s-version-221000] minikube v1.29.0 on Darwin 13.3
	I0331 11:19:16.252642   21423 notify.go:220] Checking for updates...
	I0331 11:19:16.273744   21423 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 11:19:16.295914   21423 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:19:16.316645   21423 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 11:19:16.337890   21423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 11:19:16.358893   21423 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 11:19:16.379907   21423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 11:19:16.401268   21423 config.go:182] Loaded profile config "old-k8s-version-221000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0331 11:19:16.423754   21423 out.go:177] * Kubernetes 1.26.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.3
	I0331 11:19:16.444635   21423 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 11:19:16.510690   21423 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 11:19:16.510808   21423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:19:16.698303   21423 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:19:16.564573497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:19:16.719999   21423 out.go:177] * Using the docker driver based on existing profile
	I0331 11:19:16.740729   21423 start.go:295] selected driver: docker
	I0331 11:19:16.740774   21423 start.go:859] validating driver "docker" against &{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:19:16.740897   21423 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 11:19:16.744910   21423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:19:16.932561   21423 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:19:16.798538018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:19:16.932717   21423 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0331 11:19:16.932737   21423 cni.go:84] Creating CNI manager for ""
	I0331 11:19:16.932749   21423 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 11:19:16.932764   21423 start_flags.go:319] config:
	{Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:19:16.976293   21423 out.go:177] * Starting control plane node old-k8s-version-221000 in cluster old-k8s-version-221000
	I0331 11:19:16.997410   21423 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 11:19:17.019183   21423 out.go:177] * Pulling base image ...
	I0331 11:19:17.061469   21423 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 11:19:17.061485   21423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:19:17.061602   21423 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0331 11:19:17.061621   21423 cache.go:57] Caching tarball of preloaded images
	I0331 11:19:17.061843   21423 preload.go:174] Found /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 11:19:17.061860   21423 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0331 11:19:17.062842   21423 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/config.json ...
	I0331 11:19:17.122140   21423 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 11:19:17.122167   21423 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 11:19:17.122191   21423 cache.go:193] Successfully downloaded all kic artifacts
	I0331 11:19:17.122231   21423 start.go:364] acquiring machines lock for old-k8s-version-221000: {Name:mkd3c9d5738895d94e9fe50102426daf0ea0e9c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 11:19:17.122323   21423 start.go:368] acquired machines lock for "old-k8s-version-221000" in 71.839µs
	I0331 11:19:17.122350   21423 start.go:96] Skipping create...Using existing machine configuration
	I0331 11:19:17.122359   21423 fix.go:55] fixHost starting: 
	I0331 11:19:17.122597   21423 cli_runner.go:164] Run: docker container inspect old-k8s-version-221000 --format={{.State.Status}}
	I0331 11:19:17.182229   21423 fix.go:103] recreateIfNeeded on old-k8s-version-221000: state=Stopped err=<nil>
	W0331 11:19:17.182260   21423 fix.go:129] unexpected machine state, will restart: <nil>
	I0331 11:19:17.225645   21423 out.go:177] * Restarting existing docker container for "old-k8s-version-221000" ...
	I0331 11:19:17.247157   21423 cli_runner.go:164] Run: docker start old-k8s-version-221000
	I0331 11:19:17.587086   21423 cli_runner.go:164] Run: docker container inspect old-k8s-version-221000 --format={{.State.Status}}
	I0331 11:19:17.650902   21423 kic.go:426] container "old-k8s-version-221000" state is running.
	I0331 11:19:17.651457   21423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-221000
	I0331 11:19:17.718094   21423 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/config.json ...
	I0331 11:19:17.718496   21423 machine.go:88] provisioning docker machine ...
	I0331 11:19:17.718524   21423 ubuntu.go:169] provisioning hostname "old-k8s-version-221000"
	I0331 11:19:17.718596   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:17.784026   21423 main.go:141] libmachine: Using SSH client type: native
	I0331 11:19:17.784478   21423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53597 <nil> <nil>}
	I0331 11:19:17.784490   21423 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-221000 && echo "old-k8s-version-221000" | sudo tee /etc/hostname
	I0331 11:19:17.943151   21423 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-221000
	
	I0331 11:19:17.943244   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:18.041228   21423 main.go:141] libmachine: Using SSH client type: native
	I0331 11:19:18.041597   21423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53597 <nil> <nil>}
	I0331 11:19:18.041614   21423 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-221000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-221000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-221000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 11:19:18.176705   21423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
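The SSH script above is an idempotent hosts fix-up: if /etc/hosts already names the machine it is left alone, an existing 127.0.1.1 line is rewritten in place, and only otherwise is a new entry appended. A minimal Go sketch of the same decision logic against a local file (the file name hosts.test is illustrative; the real provisioner edits /etc/hosts over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the logged shell: skip if the host is already
// mapped, rewrite an existing 127.0.1.1 line, otherwise append one.
func ensureHostsEntry(path, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) > 0 && f[len(f)-1] == host {
			return nil // an entry for this host already exists
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + host // rewrite the existing line in place
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
		}
	}
	lines = append(lines, "127.0.1.1 "+host) // no 127.0.1.1 line: append one
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts.test", "old-k8s-version-221000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}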
	I0331 11:19:18.176733   21423 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 11:19:18.176765   21423 ubuntu.go:177] setting up certificates
	I0331 11:19:18.176782   21423 provision.go:83] configureAuth start
	I0331 11:19:18.176860   21423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-221000
	I0331 11:19:18.237121   21423 provision.go:138] copyHostCerts
	I0331 11:19:18.237215   21423 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 11:19:18.237226   21423 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 11:19:18.237336   21423 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 11:19:18.237540   21423 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 11:19:18.237548   21423 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 11:19:18.237607   21423 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 11:19:18.237757   21423 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 11:19:18.237762   21423 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 11:19:18.237823   21423 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 11:19:18.237942   21423 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-221000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-221000]
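provision.go generates the Docker server certificate with the SAN list shown above, mixing IP addresses and DNS names. A condensed sketch of producing such a certificate with Go's crypto/x509; it is self-signed for brevity, whereas minikube signs with its CA key, and the names below are copied from the log line:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-221000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as in the san=[...] list above: IPs plus DNS names.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-221000"},
	}
	// Self-signed for the sketch: the template doubles as the parent cert.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}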
	I0331 11:19:18.718614   21423 provision.go:172] copyRemoteCerts
	I0331 11:19:18.718689   21423 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 11:19:18.718744   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:18.779732   21423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53597 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:19:18.876449   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 11:19:18.893857   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0331 11:19:18.911659   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0331 11:19:18.929567   21423 provision.go:86] duration metric: configureAuth took 752.809968ms
	I0331 11:19:18.929581   21423 ubuntu.go:193] setting minikube options for container-runtime
	I0331 11:19:18.929736   21423 config.go:182] Loaded profile config "old-k8s-version-221000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0331 11:19:18.929800   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:18.992121   21423 main.go:141] libmachine: Using SSH client type: native
	I0331 11:19:18.992467   21423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53597 <nil> <nil>}
	I0331 11:19:18.992481   21423 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 11:19:19.126648   21423 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 11:19:19.126663   21423 ubuntu.go:71] root file system type: overlay
	I0331 11:19:19.126738   21423 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 11:19:19.126819   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:19.187804   21423 main.go:141] libmachine: Using SSH client type: native
	I0331 11:19:19.188153   21423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53597 <nil> <nil>}
	I0331 11:19:19.188201   21423 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 11:19:19.335313   21423 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 11:19:19.335421   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:19.396136   21423 main.go:141] libmachine: Using SSH client type: native
	I0331 11:19:19.396479   21423 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53597 <nil> <nil>}
	I0331 11:19:19.396493   21423 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 11:19:19.535865   21423 main.go:141] libmachine: SSH cmd err, output: <nil>: 
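The one-liner just executed is the update-only-if-changed idiom: the candidate unit is written to docker.service.new, diffed against the live file, and moved into place (followed by daemon-reload/enable/restart) only when they differ, so an unchanged configuration never restarts Docker. The same pattern over local files, as a sketch with illustrative paths and the restart step stubbed out:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// swapIfChanged installs newPath over livePath only when the contents
// differ, reporting whether a swap (and hence a restart) happened.
func swapIfChanged(livePath, newPath string) (bool, error) {
	live, err := os.ReadFile(livePath)
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	candidate, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(live, candidate) {
		return false, os.Remove(newPath) // unchanged: discard the candidate
	}
	return true, os.Rename(newPath, livePath)
}

func main() {
	changed, err := swapIfChanged("docker.service", "docker.service.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		// The real flow now runs: systemctl daemon-reload && systemctl restart docker.
		fmt.Println("unit changed, restart required")
	}
}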
	I0331 11:19:19.535895   21423 machine.go:91] provisioned docker machine in 1.81748186s
	I0331 11:19:19.535906   21423 start.go:300] post-start starting for "old-k8s-version-221000" (driver="docker")
	I0331 11:19:19.535916   21423 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 11:19:19.536000   21423 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 11:19:19.536061   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:19.597967   21423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53597 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:19:19.694525   21423 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 11:19:19.698357   21423 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 11:19:19.698372   21423 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 11:19:19.698384   21423 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 11:19:19.698392   21423 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 11:19:19.698400   21423 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 11:19:19.698498   21423 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 11:19:19.698656   21423 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 11:19:19.698823   21423 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 11:19:19.706594   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:19:19.724684   21423 start.go:303] post-start completed in 188.76665ms
	I0331 11:19:19.724765   21423 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 11:19:19.724826   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:19.784923   21423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53597 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:19:19.878122   21423 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 11:19:19.882768   21423 fix.go:57] fixHost completed within 2.760542965s
	I0331 11:19:19.882787   21423 start.go:83] releasing machines lock for "old-k8s-version-221000", held for 2.760594652s
	I0331 11:19:19.882869   21423 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-221000
	I0331 11:19:19.944669   21423 ssh_runner.go:195] Run: cat /version.json
	I0331 11:19:19.944678   21423 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 11:19:19.944736   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:19.944762   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:20.008393   21423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53597 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	I0331 11:19:20.008394   21423 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53597 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/old-k8s-version-221000/id_rsa Username:docker}
	W0331 11:19:20.154992   21423 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 11:19:20.155071   21423 ssh_runner.go:195] Run: systemctl --version
	I0331 11:19:20.160281   21423 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0331 11:19:20.165123   21423 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0331 11:19:20.165179   21423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0331 11:19:20.173162   21423 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0331 11:19:20.180807   21423 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
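The two find/sed invocations above rewrite any bridge or podman CNI config so its subnet becomes the pod CIDR 10.244.0.0/16; in this run none were present. The same edit done structurally rather than textually, assuming a generic CNI JSON document (the sample conf below is invented for illustration):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// setSubnet walks arbitrary decoded JSON and rewrites every "subnet"
// string value to cidr, mirroring what the sed commands do line-by-line.
func setSubnet(v interface{}, cidr string) {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, val := range t {
			if k == "subnet" {
				if _, ok := val.(string); ok {
					t[k] = cidr
					continue
				}
			}
			setSubnet(val, cidr)
		}
	case []interface{}:
		for _, item := range t {
			setSubnet(item, cidr)
		}
	}
}

func main() {
	// Illustrative bridge conf; the real files live under /etc/cni/net.d.
	in := []byte(`{"name":"bridge","type":"bridge","ipam":{"type":"host-local","subnet":"10.88.0.0/16"}}`)
	var conf map[string]interface{}
	if err := json.Unmarshal(in, &conf); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	setSubnet(conf, "10.244.0.0/16")
	out, _ := json.Marshal(conf)
	fmt.Println(string(out))
}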
	I0331 11:19:20.180825   21423 start.go:481] detecting cgroup driver to use...
	I0331 11:19:20.180836   21423 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:19:20.180910   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:19:20.194362   21423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0331 11:19:20.203296   21423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 11:19:20.211902   21423 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 11:19:20.211967   21423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 11:19:20.220665   21423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:19:20.229544   21423 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 11:19:20.238212   21423 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:19:20.246858   21423 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 11:19:20.254996   21423 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 11:19:20.263624   21423 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 11:19:20.270834   21423 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 11:19:20.278090   21423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:19:20.349223   21423 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 11:19:20.425308   21423 start.go:481] detecting cgroup driver to use...
	I0331 11:19:20.425328   21423 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:19:20.425391   21423 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 11:19:20.435769   21423 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 11:19:20.435833   21423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 11:19:20.446277   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:19:20.460290   21423 ssh_runner.go:195] Run: which cri-dockerd
	I0331 11:19:20.464583   21423 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 11:19:20.472839   21423 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 11:19:20.504849   21423 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 11:19:20.571187   21423 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 11:19:20.656882   21423 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 11:19:20.656901   21423 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 11:19:20.670375   21423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:19:20.757248   21423 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:19:20.982436   21423 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:19:21.009580   21423 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:19:21.079359   21423 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.2 ...
	I0331 11:19:21.079537   21423 cli_runner.go:164] Run: docker exec -t old-k8s-version-221000 dig +short host.docker.internal
	I0331 11:19:21.216990   21423 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 11:19:21.217115   21423 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0331 11:19:21.221665   21423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:19:21.231820   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:21.293146   21423 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 11:19:21.293227   21423 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:19:21.313766   21423 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:19:21.313785   21423 docker.go:645] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0331 11:19:21.313859   21423 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 11:19:21.321562   21423 ssh_runner.go:195] Run: which lz4
	I0331 11:19:21.325398   21423 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0331 11:19:21.329208   21423 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0331 11:19:21.329234   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0331 11:19:26.655653   21423 docker.go:603] Took 5.330589 seconds to copy over tarball
	I0331 11:19:26.655731   21423 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0331 11:19:29.002513   21423 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.346882825s)
	I0331 11:19:29.002528   21423 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0331 11:19:29.066326   21423 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0331 11:19:29.074577   21423 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0331 11:19:29.088737   21423 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:19:29.158153   21423 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:19:29.706168   21423 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:19:29.728013   21423 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0331 11:19:29.728035   21423 docker.go:645] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0331 11:19:29.728044   21423 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0331 11:19:29.740899   21423 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:19:29.741042   21423 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0331 11:19:29.742007   21423 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:19:29.743520   21423 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:19:29.743974   21423 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0331 11:19:29.745616   21423 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:19:29.747639   21423 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:19:29.748118   21423 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:19:29.753568   21423 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error: No such image: registry.k8s.io/coredns:1.6.2
	I0331 11:19:29.754898   21423 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:19:29.756963   21423 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:19:29.758099   21423 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error: No such image: registry.k8s.io/pause:3.1
	I0331 11:19:29.758716   21423 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:19:29.759061   21423 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:19:29.759747   21423 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error: No such image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:19:29.762304   21423 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:19:30.915009   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0331 11:19:30.936754   21423 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0331 11:19:30.936799   21423 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.2
	I0331 11:19:30.936855   21423 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0331 11:19:30.957703   21423 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0331 11:19:31.070058   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:19:31.092411   21423 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0331 11:19:31.092452   21423 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:19:31.092516   21423 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0331 11:19:31.115055   21423 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0331 11:19:31.289965   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:19:31.310823   21423 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0331 11:19:31.310857   21423 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:19:31.310914   21423 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0331 11:19:31.328349   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0331 11:19:31.331918   21423 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0331 11:19:31.350560   21423 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0331 11:19:31.350592   21423 docker.go:313] Removing image: registry.k8s.io/pause:3.1
	I0331 11:19:31.350650   21423 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0331 11:19:31.372821   21423 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0331 11:19:31.600849   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:19:31.621231   21423 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0331 11:19:31.621261   21423 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:19:31.621322   21423 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0331 11:19:31.643064   21423 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0331 11:19:31.903594   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:19:31.925321   21423 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0331 11:19:31.925373   21423 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:19:31.925466   21423 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0331 11:19:31.946343   21423 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0331 11:19:32.179365   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0331 11:19:32.200302   21423 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0331 11:19:32.200329   21423 docker.go:313] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0331 11:19:32.200402   21423 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0331 11:19:32.219375   21423 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0331 11:19:32.959502   21423 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:19:32.980367   21423 cache_images.go:92] LoadImages completed in 3.252473622s
	W0331 11:19:32.980455   21423 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2: no such file or directory
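The failure above is local, not network-related: LoadImages stats the per-image tarball under .minikube/cache/images before trying to load it into the runtime, and the coredns_1.6.2 file was never downloaded. The check reduces to a stat-before-use helper (the path is copied from the log; the helper itself is illustrative):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// cachedImageExists reports whether an image tarball is in the local
// cache, distinguishing "not downloaded yet" from real I/O errors.
func cachedImageExists(path string) (bool, error) {
	_, err := os.Stat(path)
	if errors.Is(err, fs.ErrNotExist) {
		return false, nil
	}
	return err == nil, err
}

func main() {
	p := "/Users/jenkins/minikube-integration/16144-2324/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2"
	ok, err := cachedImageExists(p)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println("cache miss: image must be pulled before it can be loaded")
	}
}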
	I0331 11:19:32.980538   21423 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 11:19:33.007114   21423 cni.go:84] Creating CNI manager for ""
	I0331 11:19:33.007139   21423 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 11:19:33.007158   21423 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 11:19:33.007172   21423 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-221000 NodeName:old-k8s-version-221000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 11:19:33.007266   21423 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-221000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-221000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0331 11:19:33.007334   21423 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-221000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
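The generated kubeadm.yaml printed above is one file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. One way to sanity-check such a file before handing it to kubeadm is to decode each document in turn; a sketch using the third-party gopkg.in/yaml.v3 package, which is an assumption here (minikube itself uses its own config machinery):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // illustrative local copy of the generated config
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // all documents parsed
			}
			fmt.Fprintln(os.Stderr, "invalid YAML document:", err)
			os.Exit(1)
		}
		// Every kubeadm document must carry apiVersion and kind.
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}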
	I0331 11:19:33.007397   21423 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0331 11:19:33.015359   21423 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 11:19:33.015415   21423 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 11:19:33.023159   21423 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0331 11:19:33.036371   21423 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 11:19:33.049882   21423 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0331 11:19:33.063798   21423 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0331 11:19:33.067937   21423 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:19:33.078272   21423 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000 for IP: 192.168.76.2
	I0331 11:19:33.078292   21423 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:19:33.078443   21423 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 11:19:33.078491   21423 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 11:19:33.078576   21423 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/client.key
	I0331 11:19:33.078647   21423 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key.31bdca25
	I0331 11:19:33.078707   21423 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.key
	I0331 11:19:33.078909   21423 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 11:19:33.078944   21423 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 11:19:33.078956   21423 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 11:19:33.078986   21423 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 11:19:33.079020   21423 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 11:19:33.079049   21423 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 11:19:33.079117   21423 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:19:33.079619   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 11:19:33.098388   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0331 11:19:33.116248   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 11:19:33.133986   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/old-k8s-version-221000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 11:19:33.151648   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 11:19:33.169329   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 11:19:33.187311   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 11:19:33.205116   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 11:19:33.222703   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 11:19:33.240569   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 11:19:33.258636   21423 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 11:19:33.276352   21423 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 11:19:33.289454   21423 ssh_runner.go:195] Run: openssl version
	I0331 11:19:33.295500   21423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 11:19:33.304015   21423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 11:19:33.308191   21423 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 11:19:33.308235   21423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 11:19:33.313870   21423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
	I0331 11:19:33.321655   21423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 11:19:33.330181   21423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:19:33.334731   21423 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:19:33.334782   21423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:19:33.340173   21423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 11:19:33.347908   21423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 11:19:33.356065   21423 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 11:19:33.360402   21423 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 11:19:33.360446   21423 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 11:19:33.366201   21423 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
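Each certificate installed above is also linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0), the layout OpenSSL's CApath lookup expects. A sketch of that hash-and-link step, shelling out to the openssl binary just as the log does (assumes openssl is on PATH; the paths in main are illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certPath into dir under "<subject-hash>.0", which
// is how OpenSSL locates CA certificates at verification time.
func linkByHash(certPath, dir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	// Replace any stale link, mirroring ln -fs.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("minikubeCA.pem", ".")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked as", link)
}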
	I0331 11:19:33.374213   21423 kubeadm.go:401] StartCluster: {Name:old-k8s-version-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-221000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:19:33.374311   21423 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:19:33.394081   21423 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 11:19:33.402281   21423 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0331 11:19:33.402296   21423 kubeadm.go:633] restartCluster start
	I0331 11:19:33.402347   21423 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0331 11:19:33.409633   21423 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:33.409704   21423 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-221000
	I0331 11:19:33.473100   21423 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-221000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:19:33.473255   21423 kubeconfig.go:146] "old-k8s-version-221000" context is missing from /Users/jenkins/minikube-integration/16144-2324/kubeconfig - will repair!
	I0331 11:19:33.473563   21423 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:19:33.475013   21423 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0331 11:19:33.483327   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:33.483379   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:33.492310   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:33.994395   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:33.994532   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:34.005831   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:34.492349   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:34.492500   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:34.503323   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:34.994402   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:34.994596   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:35.005902   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:35.492493   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:35.492682   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:35.503904   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:35.992475   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:35.992592   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:36.003977   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:36.493144   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:36.493239   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:36.503242   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:36.992911   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:36.993107   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:37.004338   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:37.494311   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:37.494466   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:37.506098   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:37.994226   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:37.994408   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:38.005612   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... 11 further identical "Checking apiserver status ..." cycles elided: the same "sudo pgrep -xnf kube-apiserver.*minikube.*" check was retried roughly every 500ms from 11:19:38 to 11:19:43, each attempt exiting with status 1 and empty stdout/stderr ...]
	I0331 11:19:43.502395   21423 api_server.go:165] Checking apiserver status ...
	I0331 11:19:43.502448   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:19:43.511232   21423 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:19:43.511244   21423 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0331 11:19:43.511251   21423 kubeadm.go:1120] stopping kube-system containers ...
	I0331 11:19:43.511322   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:19:43.530003   21423 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0331 11:19:43.541169   21423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:19:43.549221   21423 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Mar 31 18:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Mar 31 18:15 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Mar 31 18:15 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Mar 31 18:15 /etc/kubernetes/scheduler.conf
	
	I0331 11:19:43.549276   21423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0331 11:19:43.557014   21423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0331 11:19:43.564723   21423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0331 11:19:43.572513   21423 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0331 11:19:43.580106   21423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:19:43.588916   21423 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0331 11:19:43.588932   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:19:43.642856   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:19:44.238662   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:19:44.406691   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:19:44.463631   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:19:44.549092   21423 api_server.go:51] waiting for apiserver process to appear ...
	I0331 11:19:44.549163   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 118 further identical polls elided: the same "sudo pgrep -xnf kube-apiserver.*minikube.*" command was re-run roughly every 500ms from 11:19:45 to 11:20:43 without ever finding an apiserver process ...]
	I0331 11:20:44.055233   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:20:44.555530   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:20:44.577003   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.577016   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:20:44.577085   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:20:44.597304   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.597317   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:20:44.597386   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:20:44.617628   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.617643   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:20:44.617710   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:20:44.636764   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.636778   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:20:44.636843   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:20:44.655532   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.655548   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:20:44.655616   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:20:44.674476   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.674490   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:20:44.674556   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:20:44.693949   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.693964   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:20:44.694039   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:20:44.713668   21423 logs.go:277] 0 containers: []
	W0331 11:20:44.713682   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:20:44.713689   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:20:44.713700   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:20:44.754620   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:20:44.754638   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:20:44.767953   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:20:44.767970   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:20:44.824894   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:20:44.824911   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:20:44.824917   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:20:44.853429   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:20:44.853448   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:20:46.903088   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049728377s)
	[... the same diagnostic cycle repeated 10 more times, roughly every 5 seconds from 11:20:49 to 11:21:36: a pgrep poll for the apiserver; "docker ps -a" checks for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard (0 containers found each time); and log gathering for kubelet, dmesg, describe nodes, Docker, and container status, with "describe nodes" failing in every cycle with "The connection to the server localhost:8443 was refused - did you specify the right host or port?" ...]
	I0331 11:21:39.421360   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:21:39.552501   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:21:39.575305   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.575338   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:21:39.575429   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:21:39.604464   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.604480   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:21:39.604563   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:21:39.628299   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.628312   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:21:39.628390   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:21:39.656261   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.656276   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:21:39.656367   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:21:39.681788   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.681802   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:21:39.681879   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:21:39.705826   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.705840   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:21:39.705914   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:21:39.731262   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.731280   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:21:39.731362   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:21:39.764661   21423 logs.go:277] 0 containers: []
	W0331 11:21:39.764676   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:21:39.764685   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:21:39.764700   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:21:39.806794   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:21:39.806813   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:21:39.820761   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:21:39.820779   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:21:39.892837   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:21:39.892850   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:21:39.892858   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:21:39.923755   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:21:39.923782   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:21:41.971583   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047884576s)
	I0331 11:21:44.473402   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:21:44.552458   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:21:44.574627   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.574642   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:21:44.574716   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:21:44.594866   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.594879   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:21:44.594950   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:21:44.615570   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.615584   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:21:44.615657   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:21:44.637828   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.637839   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:21:44.637902   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:21:44.658752   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.658764   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:21:44.658846   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:21:44.678094   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.678107   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:21:44.678184   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:21:44.698633   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.698647   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:21:44.698714   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:21:44.718362   21423 logs.go:277] 0 containers: []
	W0331 11:21:44.718376   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:21:44.718384   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:21:44.718392   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:21:44.758274   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:21:44.758296   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:21:44.772728   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:21:44.772745   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:21:44.838851   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:21:44.838865   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:21:44.838871   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:21:44.866039   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:21:44.866057   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:21:46.916325   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050357143s)
	I0331 11:21:49.418125   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:21:49.552260   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:21:49.572889   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.572903   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:21:49.572975   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:21:49.593544   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.593561   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:21:49.593640   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:21:49.616892   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.616910   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:21:49.616986   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:21:49.639618   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.639634   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:21:49.639712   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:21:49.660523   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.660536   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:21:49.660602   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:21:49.679963   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.679977   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:21:49.680046   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:21:49.699275   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.699289   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:21:49.699366   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:21:49.720790   21423 logs.go:277] 0 containers: []
	W0331 11:21:49.720804   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:21:49.720812   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:21:49.720822   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:21:49.765502   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:21:49.765517   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:21:49.777742   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:21:49.777754   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:21:49.837430   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:21:49.837442   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:21:49.837449   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:21:49.863720   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:21:49.863738   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:21:51.914684   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051033664s)
	I0331 11:21:54.414920   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:21:54.551809   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:21:54.572944   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.572958   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:21:54.573048   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:21:54.593288   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.593306   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:21:54.593380   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:21:54.612664   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.612678   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:21:54.612746   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:21:54.633238   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.633252   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:21:54.633324   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:21:54.653088   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.653102   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:21:54.653169   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:21:54.672373   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.672386   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:21:54.672469   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:21:54.691999   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.692012   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:21:54.692077   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:21:54.713146   21423 logs.go:277] 0 containers: []
	W0331 11:21:54.713158   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:21:54.713166   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:21:54.713177   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:21:54.725193   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:21:54.725209   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:21:54.782721   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:21:54.782737   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:21:54.782744   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:21:54.807372   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:21:54.807388   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:21:56.851618   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044313601s)
	I0331 11:21:56.851766   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:21:56.851775   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:21:59.389967   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:21:59.553629   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:21:59.575763   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.575777   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:21:59.575843   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:21:59.597024   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.597037   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:21:59.597104   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:21:59.616065   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.616080   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:21:59.616152   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:21:59.637091   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.637105   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:21:59.637178   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:21:59.657892   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.657907   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:21:59.657980   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:21:59.678514   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.678526   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:21:59.678595   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:21:59.698386   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.698401   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:21:59.698467   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:21:59.719146   21423 logs.go:277] 0 containers: []
	W0331 11:21:59.719162   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:21:59.719171   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:21:59.719178   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:21:59.758146   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:21:59.758166   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:21:59.771143   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:21:59.771162   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:21:59.834279   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:21:59.834295   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:21:59.834302   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:21:59.858477   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:21:59.858491   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:01.906135   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047734365s)
	I0331 11:22:04.406309   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:04.551990   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:04.574354   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.574366   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:04.574436   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:04.593792   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.593802   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:04.593859   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:04.615236   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.615250   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:04.615321   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:04.636022   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.636037   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:04.636105   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:04.655735   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.655753   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:04.655846   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:04.675969   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.675983   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:04.676051   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:04.695449   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.695462   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:04.695529   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:04.714368   21423 logs.go:277] 0 containers: []
	W0331 11:22:04.714381   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:04.714389   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:04.714397   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:04.770687   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:04.770701   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:04.770708   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:04.795395   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:04.795410   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:06.842700   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04737172s)
	I0331 11:22:06.842814   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:06.842824   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:06.880027   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:06.880045   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:09.392275   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:09.551120   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:09.572465   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.572478   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:09.572551   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:09.592771   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.592783   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:09.592842   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:09.613125   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.613138   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:09.613205   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:09.633139   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.633154   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:09.633265   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:09.653605   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.653621   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:09.653691   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:09.674109   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.674122   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:09.674191   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:09.694988   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.695001   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:09.695069   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:09.714170   21423 logs.go:277] 0 containers: []
	W0331 11:22:09.714184   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:09.714191   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:09.714198   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:09.740472   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:09.740489   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:11.785904   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045505291s)
	I0331 11:22:11.786012   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:11.786019   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:11.838634   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:11.838654   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:11.851037   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:11.851052   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:11.918974   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
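	[Annotation] Each iteration opens with "sudo pgrep -xnf kube-apiserver.*minikube.*", which exits non-zero while no apiserver process exists, so the loop re-gathers logs and retries roughly every five seconds until it times out. A rough sketch of that poll-with-deadline pattern, assuming the check runs on the node itself (the two-minute timeout is illustrative, not minikube's actual value):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"time"
	    )

	    // apiserverRunning reproduces the check from the log: pgrep -xnf exits 0
	    // only when a process whose full command line matches the pattern exists.
	    func apiserverRunning() bool {
	    	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	    }

	    func main() {
	    	deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
	    	for time.Now().Before(deadline) {
	    		if apiserverRunning() {
	    			fmt.Println("kube-apiserver is up")
	    			return
	    		}
	    		// In the failing run each iteration also re-gathers kubelet, dmesg,
	    		// Docker and container-status logs before sleeping, which is why the
	    		// same block repeats throughout the output above.
	    		time.Sleep(5 * time.Second)
	    	}
	    	fmt.Println("timed out waiting for kube-apiserver")
	    }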
	I0331 11:22:14.419624   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:14.551713   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:14.572939   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.572952   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:14.573016   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:14.592647   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.592660   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:14.592729   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:14.613306   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.613319   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:14.613385   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:14.633430   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.633440   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:14.633521   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:14.652121   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.652137   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:14.652202   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:14.672110   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.672124   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:14.672194   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:14.691707   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.691720   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:14.691786   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:14.712748   21423 logs.go:277] 0 containers: []
	W0331 11:22:14.712760   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:14.712767   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:14.712775   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:14.737681   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:14.737698   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:16.783385   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045772501s)
	I0331 11:22:16.783499   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:16.783508   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:16.823864   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:16.823880   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:16.836161   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:16.836175   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:16.890529   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:19.390550   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:19.551655   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:19.571863   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.571875   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:19.571941   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:19.591527   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.591541   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:19.591607   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:19.610661   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.610674   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:19.610741   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:19.629877   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.629890   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:19.629958   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:19.649233   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.649248   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:19.649321   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:19.669126   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.669140   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:19.669207   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:19.687892   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.687906   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:19.687971   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:19.708235   21423 logs.go:277] 0 containers: []
	W0331 11:22:19.708249   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:19.708256   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:19.708263   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:19.749456   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:19.749472   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:19.761975   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:19.761990   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:19.817271   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:19.817285   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:19.817293   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:19.842106   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:19.842125   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:21.890943   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048906866s)
	I0331 11:22:24.391560   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:24.550202   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:24.570349   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.570363   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:24.570432   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:24.590614   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.590629   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:24.590697   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:24.609345   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.609358   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:24.609426   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:24.629275   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.629288   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:24.629356   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:24.648234   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.648247   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:24.648318   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:24.668355   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.668368   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:24.668436   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:24.687794   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.687806   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:24.687871   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:24.706219   21423 logs.go:277] 0 containers: []
	W0331 11:22:24.706232   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:24.706239   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:24.706246   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:24.731226   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:24.731241   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:26.776011   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044860423s)
	I0331 11:22:26.776122   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:26.776131   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:26.815853   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:26.815871   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:26.828380   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:26.828395   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:26.886975   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:29.387354   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:29.552095   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:29.573855   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.573869   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:29.573935   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:29.593160   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.593173   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:29.593239   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:29.611979   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.611992   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:29.612060   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:29.631025   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.631039   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:29.631105   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:29.650333   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.650347   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:29.650420   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:29.670393   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.670406   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:29.670474   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:29.690622   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.690636   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:29.690704   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:29.710437   21423 logs.go:277] 0 containers: []
	W0331 11:22:29.710452   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:29.710462   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:29.710472   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:29.722710   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:29.722728   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:29.798718   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
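	[Annotation] The recurring "connection to the server localhost:8443 was refused" failure from "kubectl describe nodes" means nothing is listening on the apiserver port inside the node, consistent with the empty kube-apiserver container lookups above. A quick way to confirm the same condition from Go, run on the node itself (a diagnostic sketch, not part of the test suite):

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// Attempt a plain TCP connect to the apiserver's secure port.
	    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	    	if err != nil {
	    		// Matches the kubectl failure: connection refused / port closed.
	    		fmt.Println("apiserver port closed:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("something is listening on 8443")
	    }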
	I0331 11:22:29.798738   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:29.798751   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:29.822914   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:29.822931   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:31.868813   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045973015s)
	I0331 11:22:31.868925   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:31.868933   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:34.407104   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:34.550580   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:34.572831   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.572844   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:34.572912   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:34.593024   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.593037   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:34.593096   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:34.612899   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.612914   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:34.612984   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:34.631543   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.631557   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:34.631623   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:34.652474   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.652488   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:34.652556   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:34.672888   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.672901   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:34.672967   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:34.691878   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.691892   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:34.691957   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:34.710943   21423 logs.go:277] 0 containers: []
	W0331 11:22:34.710955   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:34.710963   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:34.710971   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:36.758474   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047593123s)
	I0331 11:22:36.758588   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:36.758596   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:36.796739   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:36.796753   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:36.809101   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:36.809116   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:36.863099   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:36.863111   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:36.863118   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:39.387501   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:39.549543   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:39.570732   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.570746   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:39.570823   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:39.590692   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.590704   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:39.590757   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:39.610330   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.610343   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:39.610415   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:39.630556   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.630570   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:39.630631   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:39.651980   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.651997   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:39.652064   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:39.673061   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.673075   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:39.673146   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:39.693030   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.693045   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:39.693114   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:39.713529   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.713543   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:39.713558   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:39.713569   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:41.758774   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045294043s)
	I0331 11:22:41.758888   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:41.758897   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:41.797644   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:41.797660   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:41.811325   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:41.811342   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:41.871317   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:41.871329   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:41.871337   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:44.400713   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:44.551389   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:44.572314   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.572328   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:44.572395   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:44.591776   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.591790   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:44.591860   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:44.611928   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.611941   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:44.612021   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:44.631330   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.631343   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:44.631407   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:44.650381   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.650394   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:44.650467   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:44.670175   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.670188   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:44.670254   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:44.690318   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.690331   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:44.690397   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:44.710044   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.710058   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:44.710065   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:44.710075   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:46.753388   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043402554s)
	I0331 11:22:46.753499   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:46.753507   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:46.792982   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:46.792997   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:46.805753   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:46.805770   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:46.861145   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:46.861159   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:46.861166   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:49.385848   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:49.551125   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:49.573615   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.573629   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:49.573695   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:49.592828   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.592841   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:49.592907   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:49.612151   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.612165   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:49.612231   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:49.631432   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.631446   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:49.631516   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:49.649789   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.649803   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:49.649870   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:49.668617   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.668630   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:49.668696   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:49.689002   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.689015   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:49.689080   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:49.708143   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.708155   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:49.708162   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:49.708170   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:49.732613   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:49.732626   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:51.780967   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048432075s)
	I0331 11:22:51.781076   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:51.781084   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:51.818484   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:51.818498   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:51.830558   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:51.830571   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:51.886014   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
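Each "connection refused" above means nothing is listening on the apiserver port yet, so kubectl cannot describe anything. A quick probe of the endpoint named in the log (localhost:8443 is copied from the output; nothing else is assumed):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// While the apiserver is down this prints "connection refused",
    		// matching the kubectl error in the log.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }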
	I0331 11:22:54.388001   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:54.548741   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:54.569439   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.569457   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:54.569545   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:54.589915   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.589929   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:54.589997   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:54.609247   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.609261   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:54.609327   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:54.634600   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.634614   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:54.634682   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:54.654624   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.654637   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:54.654707   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:54.673470   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.673500   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:54.673577   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:54.692817   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.692832   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:54.692902   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:54.711886   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.711899   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:54.711906   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:54.711917   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:56.754826   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.042995424s)
	I0331 11:22:56.754931   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:56.754939   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:56.796025   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:56.796049   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:56.812656   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:56.812671   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:56.873895   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:56.873907   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:56.873914   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:59.400245   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:59.548658   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:59.571672   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.571685   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:59.571781   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:59.591528   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.591541   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:59.591612   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:59.611067   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.611081   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:59.611148   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:59.630062   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.630076   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:59.630144   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:59.649183   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.649205   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:59.649289   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:59.668767   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.668780   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:59.668848   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:59.687528   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.687541   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:59.687607   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:59.707449   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.707462   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:59.707468   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:59.707477   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:59.745586   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:59.745607   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:59.759082   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:59.759100   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:59.834730   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:59.834751   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:59.834759   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:59.860931   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:59.860947   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:01.908833   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047976847s)
	I0331 11:23:04.410288   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:04.548739   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:04.570362   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.570376   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:04.570446   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:04.590951   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.590964   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:04.591044   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:04.610733   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.610745   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:04.610809   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:04.629647   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.629662   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:04.629732   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:04.649186   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.649199   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:04.649268   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:04.668171   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.668184   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:04.668249   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:04.688656   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.688669   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:04.688735   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:04.707495   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.707509   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:04.707517   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:04.707525   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:04.744682   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:04.744696   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:04.757029   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:04.757043   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:04.815382   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:04.815400   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:04.815407   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:04.843261   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:04.843280   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:06.889723   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046534007s)
	I0331 11:23:09.389955   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:09.548411   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:09.569634   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.569648   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:09.569722   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:09.589780   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.589794   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:09.589875   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:09.610251   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.610264   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:09.610334   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:09.631293   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.631307   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:09.631376   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:09.651338   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.651351   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:09.651419   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:09.671354   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.671366   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:09.671431   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:09.691838   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.691850   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:09.691919   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:09.710869   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.710883   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:09.710891   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:09.710897   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:09.748387   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:09.748403   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:09.760528   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:09.760544   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:09.815205   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:09.815217   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:09.815224   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:09.841050   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:09.841064   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:11.889078   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048104155s)
	I0331 11:23:14.389278   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:14.547855   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:14.568170   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.568185   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:14.568254   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:14.587182   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.587196   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:14.587264   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:14.607141   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.607154   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:14.607220   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:14.626768   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.626781   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:14.626845   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:14.646419   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.646432   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:14.646512   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:14.666074   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.666087   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:14.666154   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:14.685705   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.685719   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:14.685787   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:14.705636   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.705649   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:14.705656   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:14.705664   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:14.742416   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:14.742434   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:14.755049   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:14.755064   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:14.815109   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:14.815121   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:14.815128   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:14.841394   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:14.841411   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:16.886623   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045303023s)
	I0331 11:23:19.386735   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:19.547705   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:19.568772   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.568786   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:19.568857   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:19.588754   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.588769   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:19.588836   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:19.608619   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.608634   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:19.608702   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:19.628847   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.628861   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:19.628928   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:19.647575   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.647588   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:19.647653   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:19.666801   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.666815   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:19.666881   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:19.686586   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.686598   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:19.686665   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:19.705851   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.705864   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:19.705871   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:19.705879   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:19.745708   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:19.745724   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:19.758179   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:19.758200   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:19.814044   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:19.814055   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:19.814063   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:19.840642   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:19.840660   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:21.886961   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046391826s)
	I0331 11:23:24.389125   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:24.547833   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:24.568815   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.568829   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:24.568897   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:24.588413   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.588427   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:24.588495   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:24.609351   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.609363   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:24.609437   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:24.629075   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.629088   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:24.629155   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:24.648570   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.648583   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:24.648653   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:24.668686   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.668701   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:24.668768   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:24.688604   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.688616   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:24.688689   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:24.708111   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.708124   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:24.708131   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:24.708141   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:26.751781   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043728803s)
	I0331 11:23:26.751890   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:26.751899   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:26.789873   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:26.789892   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:26.802516   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:26.802533   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:26.857353   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:26.857366   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:26.857373   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:29.382363   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:29.547107   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:29.566804   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.566818   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:29.566889   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:29.588028   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.588041   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:29.588105   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:29.614722   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.614742   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:29.614815   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:29.633253   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.633268   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:29.633333   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:29.653839   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.653852   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:29.653916   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:29.673874   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.673888   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:29.673955   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:29.694426   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.694440   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:29.694506   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:29.713682   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.713697   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:29.713709   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:29.713726   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:29.754068   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:29.754090   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:29.767831   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:29.767846   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:29.830629   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:29.830641   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:29.830649   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:29.856123   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:29.856139   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:31.904519   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048469725s)
	I0331 11:23:34.405590   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:34.547014   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:34.569730   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.569743   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:34.569809   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:34.590129   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.590143   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:34.590222   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:34.609190   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.609203   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:34.609271   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:34.627848   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.627861   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:34.627926   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:34.648147   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.648160   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:34.648227   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:34.668624   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.668636   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:34.668701   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:34.687703   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.687716   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:34.687783   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:34.707883   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.707896   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:34.707903   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:34.707909   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:34.745065   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:34.745080   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:34.757393   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:34.757406   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:34.812552   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:34.812567   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:34.812574   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:34.838823   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:34.838839   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:36.886260   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047513153s)
	I0331 11:23:39.386842   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:39.546523   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:39.566342   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.566356   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:39.566427   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:39.586917   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.586929   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:39.586998   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:39.609561   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.609575   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:39.609642   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:39.629222   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.629240   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:39.629309   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:39.649860   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.649873   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:39.649941   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:39.669798   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.669812   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:39.669881   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:39.690535   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.690549   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:39.690616   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:39.710540   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.710553   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:39.710560   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:39.710568   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:39.748068   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:39.748083   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:39.759938   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:39.759951   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:39.815348   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:39.815360   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:39.815368   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:39.840631   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:39.840645   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:41.889540   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04898731s)
	I0331 11:23:44.389901   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:44.547243   21423 kubeadm.go:637] restartCluster took 4m11.157494112s
	W0331 11:23:44.547360   21423 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
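The restart loop gives up here because pgrep never finds a running apiserver process. A rough sketch of that wait, assuming a Linux host with pgrep; the match pattern and the roughly four-minute budget are copied from the log, the five-second poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when at least one process matches.
    		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("apiserver process appeared")
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("apiserver process never appeared")
    }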
	I0331 11:23:44.547394   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0331 11:23:44.962986   21423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:23:44.973068   21423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:23:44.980982   21423 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:23:44.981032   21423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:23:44.988742   21423 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
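The config check above exits with status 2 because none of the four kubeconfig files that kubeadm writes exist yet, so there is no stale configuration to clean up before the reinit. An illustrative equivalent of the check (paths taken verbatim from the log):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if _, err := os.Stat(f); err != nil {
    			fmt.Println("missing:", f) // absent on a freshly reset node
    		}
    	}
    }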
	I0331 11:23:44.988772   21423 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:23:45.037554   21423 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:23:45.037612   21423 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:23:45.208508   21423 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:23:45.208638   21423 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:23:45.208731   21423 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:23:45.365604   21423 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:23:45.366337   21423 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:23:45.373561   21423 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:23:45.444598   21423 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:23:45.465999   21423 out.go:204]   - Generating certificates and keys ...
	I0331 11:23:45.466091   21423 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:23:45.466180   21423 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:23:45.466318   21423 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 11:23:45.466386   21423 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 11:23:45.466450   21423 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 11:23:45.466497   21423 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 11:23:45.466554   21423 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 11:23:45.466619   21423 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 11:23:45.466680   21423 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 11:23:45.466775   21423 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 11:23:45.466811   21423 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 11:23:45.466859   21423 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:23:45.548669   21423 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:23:45.652195   21423 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:23:45.763221   21423 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:23:45.855620   21423 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:23:45.856133   21423 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:23:45.877402   21423 out.go:204]   - Booting up control plane ...
	I0331 11:23:45.877492   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:23:45.877558   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:23:45.877617   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:23:45.877690   21423 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:23:45.877817   21423 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:24:25.863271   21423 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:24:25.864251   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:24:25.864454   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:24:30.865616   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:24:30.865866   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:24:40.865752   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:24:40.865911   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:25:00.865639   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:25:00.865820   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:25:40.864905   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:25:40.865082   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:25:40.865103   21423 kubeadm.go:322] 
	I0331 11:25:40.865148   21423 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:25:40.865184   21423 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:25:40.865191   21423 kubeadm.go:322] 
	I0331 11:25:40.865240   21423 kubeadm.go:322] This error is likely caused by:
	I0331 11:25:40.865268   21423 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:25:40.865338   21423 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:25:40.865344   21423 kubeadm.go:322] 
	I0331 11:25:40.865456   21423 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:25:40.865483   21423 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:25:40.865506   21423 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:25:40.865510   21423 kubeadm.go:322] 
	I0331 11:25:40.865625   21423 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:25:40.865729   21423 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0331 11:25:40.865808   21423 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0331 11:25:40.865845   21423 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:25:40.865897   21423 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:25:40.865923   21423 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:25:40.868976   21423 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:25:40.869053   21423 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:25:40.869163   21423 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:25:40.869275   21423 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:25:40.869348   21423 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:25:40.869414   21423 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0331 11:25:40.869542   21423 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0331 11:25:40.869583   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0331 11:25:41.282624   21423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:25:41.292920   21423 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:25:41.292974   21423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:25:41.300757   21423 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 11:25:41.300777   21423 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:25:41.349949   21423 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:25:41.350001   21423 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:25:41.523944   21423 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:25:41.524033   21423 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:25:41.524131   21423 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:25:41.683779   21423 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:25:41.684705   21423 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:25:41.691576   21423 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:25:41.766654   21423 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:25:41.788377   21423 out.go:204]   - Generating certificates and keys ...
	I0331 11:25:41.788466   21423 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:25:41.788531   21423 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:25:41.788592   21423 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 11:25:41.788672   21423 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 11:25:41.788757   21423 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 11:25:41.788816   21423 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 11:25:41.788908   21423 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 11:25:41.788960   21423 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 11:25:41.789021   21423 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 11:25:41.789078   21423 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 11:25:41.789110   21423 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 11:25:41.789146   21423 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:25:42.012536   21423 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:25:42.163046   21423 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:25:42.241784   21423 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:25:42.536134   21423 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:25:42.536733   21423 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:25:42.558306   21423 out.go:204]   - Booting up control plane ...
	I0331 11:25:42.558531   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:25:42.558661   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:25:42.558782   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:25:42.558955   21423 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:25:42.559186   21423 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:26:22.542983   21423 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:26:22.543923   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:22.544165   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:26:27.545859   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:27.546117   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:26:37.547896   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:37.548104   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:26:57.547453   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:57.547604   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:27:37.546901   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:27:37.547043   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:27:37.547055   21423 kubeadm.go:322] 
	I0331 11:27:37.547094   21423 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:27:37.547123   21423 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:27:37.547127   21423 kubeadm.go:322] 
	I0331 11:27:37.547160   21423 kubeadm.go:322] This error is likely caused by:
	I0331 11:27:37.547183   21423 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:27:37.547260   21423 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:27:37.547270   21423 kubeadm.go:322] 
	I0331 11:27:37.547385   21423 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:27:37.547423   21423 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:27:37.547449   21423 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:27:37.547453   21423 kubeadm.go:322] 
	I0331 11:27:37.547530   21423 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:27:37.547602   21423 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0331 11:27:37.547676   21423 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0331 11:27:37.547721   21423 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:27:37.547786   21423 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:27:37.547812   21423 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:27:37.550772   21423 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:27:37.550830   21423 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:27:37.550912   21423 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:27:37.551023   21423 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:27:37.551103   21423 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:27:37.551161   21423 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0331 11:27:37.551187   21423 kubeadm.go:403] StartCluster complete in 8m4.201191772s
	I0331 11:27:37.551286   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:27:37.570650   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.570663   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:27:37.570734   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:27:37.589714   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.589727   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:27:37.589792   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:27:37.610984   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.610998   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:27:37.611068   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:27:37.633571   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.633584   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:27:37.633657   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:27:37.654128   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.654143   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:27:37.654221   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:27:37.675062   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.675075   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:27:37.675141   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:27:37.694415   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.694429   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:27:37.694498   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:27:37.716996   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.717013   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:27:37.717021   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:27:37.717029   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:27:37.762318   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:27:37.762351   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:27:37.778195   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:27:37.778211   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:27:37.839666   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:27:37.839681   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:27:37.839688   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:27:37.866077   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:27:37.866096   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:27:39.918640   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052628923s)
	W0331 11:27:39.918781   21423 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0331 11:27:39.918806   21423 out.go:239] * 
	W0331 11:27:39.918945   21423 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 11:27:39.918981   21423 out.go:239] * 
	W0331 11:27:39.919924   21423 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0331 11:27:40.013221   21423 out.go:177] 
	W0331 11:27:40.087443   21423 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0331 11:27:40.087525   21423 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0331 11:27:40.087581   21423 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0331 11:27:40.108091   21423 out.go:177] 

** /stderr **
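
The kubeadm output above already names each triage command; the sketch below simply strings them together against this run's profile (old-k8s-version-221000), assuming the node container is still reachable over 'minikube ssh'. It is an illustration, not part of the recorded run.

  # Probe the kubelet the way kubeadm's checks do (commands taken from the log above):
  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "sudo systemctl status kubelet --no-pager"
  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "sudo journalctl -xeu kubelet | tail -n 50"
  # The health endpoint kubeadm was polling:
  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "curl -sSL http://localhost:10248/healthz"
  # Any kube containers that did start, and their logs:
  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "sudo docker ps -a | grep kube | grep -v pause"
  # If the cgroup-driver warning is the culprit, retry with the flag minikube itself suggests:
  out/minikube-darwin-amd64 start -p old-k8s-version-221000 --extra-config=kubelet.cgroup-driver=systemd
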
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-221000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-221000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-221000:

-- stdout --
	[
	    {
	        "Id": "0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c",
	        "Created": "2023-03-31T18:13:14.794492262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:19:17.579830346Z",
	            "FinishedAt": "2023-03-31T18:19:14.577555049Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hosts",
	        "LogPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c-json.log",
	        "Name": "/old-k8s-version-221000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-221000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-221000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/docker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-221000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-221000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-221000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0456e7b7510bb75cc0d831a39cb0499c70c9c7a3e36cf7af9c3693387f85c05",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53597"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53598"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53600"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53601"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a0456e7b7510",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-221000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0bb0a05e1404",
	                        "old-k8s-version-221000"
	                    ],
	                    "NetworkID": "1369008204ce2a861d531490c08c0f4f11e7797b90e56bf4d65905b433bee06b",
	                    "EndpointID": "298def5630fe6d14ed76667224bda0c3f5879d4b90bc4725c120d066e1d67a98",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
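
The inspect dump above is easier to consume one field at a time. Docker's Go-template --format flag, the same mechanism the harness uses later in this log, pulls out individual values; a minimal sketch (expected outputs taken from the dump above):

  docker inspect --format '{{.State.Status}}' old-k8s-version-221000                                                  # running
  docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-221000   # 53601
  docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-221000           # 192.168.76.2
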
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (450.13273ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
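
The harness queries only {{.Host}}, which is why the stdout above reads just "Running" while the command still exits non-zero. The remaining fields of minikube's status template show which component is down; a sketch, assuming the standard template field names:

  # Host can be Running while Kubelet/APIServer are not, giving a non-zero exit:
  out/minikube-darwin-amd64 status -p old-k8s-version-221000 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
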
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-221000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-221000 logs -n 25: (3.657324799s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-346000 sudo                                 | kubenet-346000         | jenkins | v1.29.0 | 31 Mar 23 11:14 PDT | 31 Mar 23 11:14 PDT |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-346000 sudo                                 | kubenet-346000         | jenkins | v1.29.0 | 31 Mar 23 11:14 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-346000 sudo                                 | kubenet-346000         | jenkins | v1.29.0 | 31 Mar 23 11:14 PDT | 31 Mar 23 11:14 PDT |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-346000 sudo find                            | kubenet-346000         | jenkins | v1.29.0 | 31 Mar 23 11:14 PDT | 31 Mar 23 11:14 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-346000 sudo crio                            | kubenet-346000         | jenkins | v1.29.0 | 31 Mar 23 11:14 PDT | 31 Mar 23 11:14 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-346000                                      | kubenet-346000         | jenkins | v1.29.0 | 31 Mar 23 11:14 PDT | 31 Mar 23 11:14 PDT |
	| start   | -p no-preload-374000                                   | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:14 PDT | 31 Mar 23 11:15 PDT |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-374000             | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:15 PDT | 31 Mar 23 11:15 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-374000                                   | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:15 PDT | 31 Mar 23 11:15 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-374000                  | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:15 PDT | 31 Mar 23 11:15 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-374000                                   | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:15 PDT | 31 Mar 23 11:21 PDT |
	|         | --memory=2200 --alsologtostderr                        |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-221000        | old-k8s-version-221000 | jenkins | v1.29.0 | 31 Mar 23 11:17 PDT |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-221000                              | old-k8s-version-221000 | jenkins | v1.29.0 | 31 Mar 23 11:19 PDT | 31 Mar 23 11:19 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-221000             | old-k8s-version-221000 | jenkins | v1.29.0 | 31 Mar 23 11:19 PDT | 31 Mar 23 11:19 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-221000                              | old-k8s-version-221000 | jenkins | v1.29.0 | 31 Mar 23 11:19 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-374000 sudo                              | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-374000                                   | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-374000                                   | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-374000                                   | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	| delete  | -p no-preload-374000                                   | no-preload-374000      | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	| start   | -p embed-certs-877000                                  | embed-certs-877000     | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:22 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877000            | embed-certs-877000     | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT | 31 Mar 23 11:22 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-877000                                  | embed-certs-877000     | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT | 31 Mar 23 11:22 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877000                 | embed-certs-877000     | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT | 31 Mar 23 11:22 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-877000                                  | embed-certs-877000     | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 11:22:38
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 11:22:38.593979   22066 out.go:296] Setting OutFile to fd 1 ...
	I0331 11:22:38.594167   22066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:22:38.594173   22066 out.go:309] Setting ErrFile to fd 2...
	I0331 11:22:38.594177   22066 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:22:38.594307   22066 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 11:22:38.595920   22066 out.go:303] Setting JSON to false
	I0331 11:22:38.616061   22066 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4926,"bootTime":1680282032,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 11:22:38.616182   22066 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 11:22:38.638127   22066 out.go:177] * [embed-certs-877000] minikube v1.29.0 on Darwin 13.3
	I0331 11:22:38.680334   22066 notify.go:220] Checking for updates...
	I0331 11:22:38.680359   22066 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 11:22:38.702276   22066 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:22:38.722953   22066 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 11:22:38.743901   22066 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 11:22:38.764858   22066 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 11:22:38.786048   22066 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 11:22:38.807257   22066 config.go:182] Loaded profile config "embed-certs-877000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:22:38.807631   22066 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 11:22:38.871819   22066 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 11:22:38.871971   22066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:22:39.060497   22066 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:22:38.9252649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:22:39.082184   22066 out.go:177] * Using the docker driver based on existing profile
	I0331 11:22:39.104163   22066 start.go:295] selected driver: docker
	I0331 11:22:39.104188   22066 start.go:859] validating driver "docker" against &{Name:embed-certs-877000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:embed-certs-877000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:22:39.104344   22066 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 11:22:39.108477   22066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:22:39.294640   22066 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:22:39.161218729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:22:39.294790   22066 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0331 11:22:39.294810   22066 cni.go:84] Creating CNI manager for ""
	I0331 11:22:39.294821   22066 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:22:39.294836   22066 start_flags.go:319] config:
	{Name:embed-certs-877000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:embed-certs-877000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
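
The restart log above notes that the docker driver plus the docker runtime on Kubernetes v1.24+ leads minikube to recommend the bridge CNI automatically. Making that choice explicit is a one-flag change; a sketch using the --cni option of minikube start:

  out/minikube-darwin-amd64 start -p embed-certs-877000 --driver=docker --kubernetes-version=v1.26.3 --cni=bridge
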
	I0331 11:22:39.338366   22066 out.go:177] * Starting control plane node embed-certs-877000 in cluster embed-certs-877000
	I0331 11:22:39.359602   22066 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 11:22:39.381539   22066 out.go:177] * Pulling base image ...
	I0331 11:22:39.423524   22066 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 11:22:39.423572   22066 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 11:22:39.423668   22066 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0331 11:22:39.423686   22066 cache.go:57] Caching tarball of preloaded images
	I0331 11:22:39.423904   22066 preload.go:174] Found /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 11:22:39.423926   22066 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0331 11:22:39.424965   22066 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/config.json ...
	I0331 11:22:39.483493   22066 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 11:22:39.483513   22066 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 11:22:39.483548   22066 cache.go:193] Successfully downloaded all kic artifacts
	I0331 11:22:39.483599   22066 start.go:364] acquiring machines lock for embed-certs-877000: {Name:mk136b206a4e2938ee9bb58405f58caf37cfb148 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 11:22:39.483683   22066 start.go:368] acquired machines lock for "embed-certs-877000" in 65.26µs
	I0331 11:22:39.483707   22066 start.go:96] Skipping create...Using existing machine configuration
	I0331 11:22:39.483714   22066 fix.go:55] fixHost starting: 
	I0331 11:22:39.483958   22066 cli_runner.go:164] Run: docker container inspect embed-certs-877000 --format={{.State.Status}}
	I0331 11:22:39.543799   22066 fix.go:103] recreateIfNeeded on embed-certs-877000: state=Stopped err=<nil>
	W0331 11:22:39.543830   22066 fix.go:129] unexpected machine state, will restart: <nil>
	I0331 11:22:39.565690   22066 out.go:177] * Restarting existing docker container for "embed-certs-877000" ...
	I0331 11:22:36.758474   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047593123s)
	I0331 11:22:36.758588   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:36.758596   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:36.796739   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:36.796753   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:36.809101   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:36.809116   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:36.863099   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:36.863111   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:36.863118   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
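
Process 21423 is collecting four diagnostic streams inside the node here. The same bundle can be pulled by hand over minikube ssh, reusing the exact command strings from the log; a sketch, assuming the container is still reachable:

  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "sudo journalctl -u kubelet -n 400"
  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
  out/minikube-darwin-amd64 -p old-k8s-version-221000 ssh "sudo journalctl -u docker -u cri-docker -n 400"
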
	I0331 11:22:39.387501   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:39.549543   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:39.570732   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.570746   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:39.570823   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:39.590692   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.590704   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:39.590757   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:39.610330   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.610343   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:39.610415   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:39.630556   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.630570   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:39.630631   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:39.651980   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.651997   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:39.652064   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:39.673061   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.673075   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:39.673146   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:39.693030   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.693045   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:39.693114   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:39.713529   21423 logs.go:277] 0 containers: []
	W0331 11:22:39.713543   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:39.713558   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:39.713569   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:39.586200   22066 cli_runner.go:164] Run: docker start embed-certs-877000
	I0331 11:22:39.929218   22066 cli_runner.go:164] Run: docker container inspect embed-certs-877000 --format={{.State.Status}}
	I0331 11:22:39.992565   22066 kic.go:426] container "embed-certs-877000" state is running.
	I0331 11:22:39.993146   22066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-877000
	I0331 11:22:40.059394   22066 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/config.json ...
	I0331 11:22:40.059838   22066 machine.go:88] provisioning docker machine ...
	I0331 11:22:40.059865   22066 ubuntu.go:169] provisioning hostname "embed-certs-877000"
	I0331 11:22:40.059934   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:40.124950   22066 main.go:141] libmachine: Using SSH client type: native
	I0331 11:22:40.125435   22066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53724 <nil> <nil>}
	I0331 11:22:40.125450   22066 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-877000 && echo "embed-certs-877000" | sudo tee /etc/hostname
	I0331 11:22:40.278041   22066 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-877000
	
	I0331 11:22:40.278141   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:40.340725   22066 main.go:141] libmachine: Using SSH client type: native
	I0331 11:22:40.341140   22066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53724 <nil> <nil>}
	I0331 11:22:40.341154   22066 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-877000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-877000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-877000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 11:22:40.476017   22066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 11:22:40.476044   22066 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 11:22:40.476068   22066 ubuntu.go:177] setting up certificates
	I0331 11:22:40.476078   22066 provision.go:83] configureAuth start
	I0331 11:22:40.476153   22066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-877000
	I0331 11:22:40.535899   22066 provision.go:138] copyHostCerts
	I0331 11:22:40.536010   22066 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 11:22:40.536025   22066 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 11:22:40.536117   22066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 11:22:40.536334   22066 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 11:22:40.536342   22066 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 11:22:40.536403   22066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 11:22:40.536559   22066 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 11:22:40.536564   22066 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 11:22:40.536624   22066 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 11:22:40.536753   22066 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.embed-certs-877000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-877000]
	I0331 11:22:40.634449   22066 provision.go:172] copyRemoteCerts
	I0331 11:22:40.634501   22066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 11:22:40.634551   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:40.697088   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:22:40.793077   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 11:22:40.810322   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0331 11:22:40.827594   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0331 11:22:40.845432   22066 provision.go:86] duration metric: configureAuth took 369.359857ms
	I0331 11:22:40.845445   22066 ubuntu.go:193] setting minikube options for container-runtime
	I0331 11:22:40.845590   22066 config.go:182] Loaded profile config "embed-certs-877000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:22:40.845651   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:40.906785   22066 main.go:141] libmachine: Using SSH client type: native
	I0331 11:22:40.907116   22066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53724 <nil> <nil>}
	I0331 11:22:40.907125   22066 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 11:22:41.041763   22066 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 11:22:41.041777   22066 ubuntu.go:71] root file system type: overlay
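The `df --output=fstype / | tail -n 1` probe above records the root filesystem type before the Docker unit is rendered; `overlay` is what you get inside a kicbase container. The same probe works on any Linux host:

	df --output=fstype / | tail -n 1
	# overlay (inside a container), or e.g. ext4 on a typical VM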
	I0331 11:22:41.041874   22066 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 11:22:41.041955   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:41.102824   22066 main.go:141] libmachine: Using SSH client type: native
	I0331 11:22:41.103162   22066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53724 <nil> <nil>}
	I0331 11:22:41.103210   22066 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 11:22:41.246499   22066 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 11:22:41.246606   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:41.306400   22066 main.go:141] libmachine: Using SSH client type: native
	I0331 11:22:41.306737   22066 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 53724 <nil> <nil>}
	I0331 11:22:41.306751   22066 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 11:22:41.446197   22066 main.go:141] libmachine: SSH cmd err, output: <nil>: 
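Note the guard pattern in the command above: the freshly rendered unit goes to docker.service.new, and only when `diff -u` reports a difference (non-zero exit) is it moved into place and the daemon reloaded, enabled, and restarted. Repeated provisioning runs are therefore idempotent; an unchanged unit costs no Docker restart. The same shape reduced to a sketch, with a hypothetical unit name:

	sudo diff -u /lib/systemd/system/foo.service /lib/systemd/system/foo.service.new \
	  || { sudo mv /lib/systemd/system/foo.service.new /lib/systemd/system/foo.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart foo.service; }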
	I0331 11:22:41.446215   22066 machine.go:91] provisioned docker machine in 1.386437323s
	I0331 11:22:41.446228   22066 start.go:300] post-start starting for "embed-certs-877000" (driver="docker")
	I0331 11:22:41.446234   22066 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 11:22:41.446313   22066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 11:22:41.446370   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:41.506682   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:22:41.602587   22066 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 11:22:41.606245   22066 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 11:22:41.606260   22066 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 11:22:41.606267   22066 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 11:22:41.606272   22066 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 11:22:41.606280   22066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 11:22:41.606364   22066 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 11:22:41.606518   22066 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 11:22:41.606681   22066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 11:22:41.614012   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:22:41.631521   22066 start.go:303] post-start completed in 185.293264ms
	I0331 11:22:41.631598   22066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 11:22:41.631674   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:41.692635   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:22:41.783177   22066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 11:22:41.788493   22066 fix.go:57] fixHost completed within 2.304884595s
	I0331 11:22:41.788534   22066 start.go:83] releasing machines lock for "embed-certs-877000", held for 2.304952423s
	I0331 11:22:41.788668   22066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-877000
	I0331 11:22:41.852418   22066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 11:22:41.852418   22066 ssh_runner.go:195] Run: cat /version.json
	I0331 11:22:41.852520   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:41.852521   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:41.919442   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:22:41.919620   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	W0331 11:22:42.064443   22066 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 11:22:42.064523   22066 ssh_runner.go:195] Run: systemctl --version
	I0331 11:22:42.069561   22066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0331 11:22:42.074612   22066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0331 11:22:42.090142   22066 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
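The find/sed pass above patches any loopback CNI config in place: it injects a "name": "loopback" field if one is missing and pins cniVersion to 1.0.0, which recent CNI plugin releases require. Assuming a stock loopback conf (its contents are not captured in this log), the transformation would look like:

	$ cat /etc/cni/net.d/200-loopback.conf    # hypothetical contents before the patch
	{ "cniVersion": "0.3.1", "type": "loopback" }
	$ cat /etc/cni/net.d/200-loopback.conf    # after
	{ "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }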
	I0331 11:22:42.090216   22066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0331 11:22:42.097902   22066 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0331 11:22:42.097920   22066 start.go:481] detecting cgroup driver to use...
	I0331 11:22:42.097939   22066 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:22:42.098010   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:22:42.111380   22066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0331 11:22:42.120109   22066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 11:22:42.128810   22066 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 11:22:42.128865   22066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 11:22:42.137485   22066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:22:42.145972   22066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 11:22:42.154432   22066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:22:42.163144   22066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 11:22:42.170991   22066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 11:22:42.179751   22066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 11:22:42.186883   22066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 11:22:42.194164   22066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:22:42.264050   22066 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 11:22:42.335338   22066 start.go:481] detecting cgroup driver to use...
	I0331 11:22:42.335369   22066 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:22:42.335444   22066 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 11:22:42.348380   22066 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 11:22:42.348446   22066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 11:22:42.358817   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:22:42.372689   22066 ssh_runner.go:195] Run: which cri-dockerd
	I0331 11:22:42.377269   22066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 11:22:42.385387   22066 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 11:22:42.401016   22066 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 11:22:42.506956   22066 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 11:22:42.568174   22066 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 11:22:42.568197   22066 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
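The 144-byte daemon.json pushed here is the piece that actually switches dockerd to the cgroupfs driver reported at docker.go:538. Its contents are not captured in the log; a plausible minikube-style file, shown only as an assumption for illustration, would be:

	$ sudo cat /etc/docker/daemon.json    # hypothetical contents, not the captured payload
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}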
	I0331 11:22:42.602831   22066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:22:42.691748   22066 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:22:43.065128   22066 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:22:43.137330   22066 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0331 11:22:43.207191   22066 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:22:43.282019   22066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:22:43.350913   22066 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0331 11:22:43.362929   22066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:22:43.431800   22066 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0331 11:22:43.517572   22066 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0331 11:22:43.517684   22066 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0331 11:22:43.522484   22066 start.go:549] Will wait 60s for crictl version
	I0331 11:22:43.522547   22066 ssh_runner.go:195] Run: which crictl
	I0331 11:22:43.526748   22066 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0331 11:22:43.557743   22066 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.2
	RuntimeApiVersion:  v1alpha2
	I0331 11:22:43.557819   22066 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:22:43.583341   22066 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
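Both calls above use Docker's Go-template output selector to read just the server version after the restart. Run standalone it yields the bare version string:

	docker version --format '{{.Server.Version}}'
	# 23.0.2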
	I0331 11:22:41.758774   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045294043s)
	I0331 11:22:41.758888   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:41.758897   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:41.797644   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:41.797660   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:41.811325   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:41.811342   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:41.871317   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:41.871329   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:41.871337   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:44.400713   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:44.551389   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:44.572314   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.572328   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:44.572395   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:44.591776   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.591790   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:44.591860   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:44.611928   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.611941   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:44.612021   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:44.631330   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.631343   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:44.631407   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:44.650381   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.650394   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:44.650467   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:44.670175   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.670188   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:44.670254   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:44.690318   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.690331   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:44.690397   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:44.710044   21423 logs.go:277] 0 containers: []
	W0331 11:22:44.710058   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:44.710065   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:44.710075   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:43.638457   22066 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
	I0331 11:22:43.638654   22066 cli_runner.go:164] Run: docker exec -t embed-certs-877000 dig +short host.docker.internal
	I0331 11:22:43.757896   22066 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 11:22:43.758037   22066 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0331 11:22:43.762334   22066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
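The brace-grouped command above is an idempotent hosts-file update: it strips any existing host.minikube.internal line, appends the fresh mapping to a temp file keyed by the shell PID, then copies it over /etc/hosts with sudo in one step (plain output redirection would not survive the sudo boundary). Verifying the result by hand, illustratively:

	docker exec embed-certs-877000 grep 'host.minikube.internal' /etc/hosts
	# 192.168.65.2	host.minikube.internal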
	I0331 11:22:43.772408   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:43.833542   22066 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 11:22:43.833624   22066 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:22:43.853717   22066 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0331 11:22:43.853739   22066 docker.go:569] Images already preloaded, skipping extraction
	I0331 11:22:43.853825   22066 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:22:43.874321   22066 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0331 11:22:43.874340   22066 cache_images.go:84] Images are preloaded, skipping loading
	I0331 11:22:43.874422   22066 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 11:22:43.900939   22066 cni.go:84] Creating CNI manager for ""
	I0331 11:22:43.900957   22066 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:22:43.900978   22066 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 11:22:43.900993   22066 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-877000 NodeName:embed-certs-877000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 11:22:43.901102   22066 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-877000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
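The rendered file above is a single YAML stream holding four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It is staged as kubeadm.yaml.new and, further down, diffed against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the previous cluster state can be reused. On kubeadm v1.26+, which ships a validate subcommand, a config like this can also be sanity-checked by hand:

	sudo /var/lib/minikube/binaries/v1.26.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new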
	I0331 11:22:43.901180   22066 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-877000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:embed-certs-877000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0331 11:22:43.901243   22066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0331 11:22:43.909062   22066 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 11:22:43.909126   22066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 11:22:43.916696   22066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0331 11:22:43.929607   22066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 11:22:43.942624   22066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0331 11:22:43.955821   22066 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0331 11:22:43.959716   22066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:22:43.970005   22066 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000 for IP: 192.168.67.2
	I0331 11:22:43.970034   22066 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:22:43.970214   22066 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 11:22:43.970282   22066 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 11:22:43.970397   22066 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/client.key
	I0331 11:22:43.970469   22066 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/apiserver.key.c7fa3a9e
	I0331 11:22:43.970519   22066 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/proxy-client.key
	I0331 11:22:43.970750   22066 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 11:22:43.970794   22066 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 11:22:43.970814   22066 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 11:22:43.970850   22066 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 11:22:43.970885   22066 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 11:22:43.970915   22066 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 11:22:43.970996   22066 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:22:43.971595   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 11:22:43.989929   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0331 11:22:44.008747   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 11:22:44.027138   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/embed-certs-877000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0331 11:22:44.045366   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 11:22:44.063717   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 11:22:44.081274   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 11:22:44.098666   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 11:22:44.116188   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 11:22:44.134016   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 11:22:44.151394   22066 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 11:22:44.168782   22066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 11:22:44.181676   22066 ssh_runner.go:195] Run: openssl version
	I0331 11:22:44.187080   22066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 11:22:44.195136   22066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:22:44.199338   22066 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:22:44.199389   22066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:22:44.204677   22066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 11:22:44.212291   22066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 11:22:44.220536   22066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 11:22:44.224702   22066 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 11:22:44.224745   22066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 11:22:44.230311   22066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
	I0331 11:22:44.237926   22066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 11:22:44.246026   22066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 11:22:44.250020   22066 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 11:22:44.250069   22066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 11:22:44.255457   22066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
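Each openssl/ln pair above reproduces what c_rehash does: OpenSSL locates trusted CAs in /etc/ssl/certs by subject-name hash plus a .0 suffix, so every cert gets a symlink named after its hash. The link names in the log come straight from the hash output, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941  ->  /etc/ssl/certs/b5213941.0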
	I0331 11:22:44.263096   22066 kubeadm.go:401] StartCluster: {Name:embed-certs-877000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:embed-certs-877000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:22:44.263206   22066 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:22:44.283157   22066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 11:22:44.291085   22066 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0331 11:22:44.291108   22066 kubeadm.go:633] restartCluster start
	I0331 11:22:44.291157   22066 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0331 11:22:44.298456   22066 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:44.298526   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:22:44.360669   22066 kubeconfig.go:135] verify returned: extract IP: "embed-certs-877000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:22:44.360828   22066 kubeconfig.go:146] "embed-certs-877000" context is missing from /Users/jenkins/minikube-integration/16144-2324/kubeconfig - will repair!
	I0331 11:22:44.361141   22066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:22:44.362715   22066 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0331 11:22:44.370883   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:44.370937   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:44.380174   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:44.880933   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:44.881102   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:44.892112   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:45.381586   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:45.381699   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:45.392715   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:45.880448   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:45.880599   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:45.891891   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:46.382195   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:46.382412   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:46.393516   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:46.880900   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:46.880968   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:46.889734   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:47.382144   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:47.382357   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:47.393736   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:47.880874   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:47.881043   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:47.891908   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:48.382132   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:48.382286   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:48.393459   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
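Each "Checking apiserver status" block above is one iteration of a poll: `pgrep -xnf` exits 1 when no kube-apiserver process matches the pattern, the warning is logged with empty stdout/stderr, and the check retries roughly every 500ms (visible in the timestamps) until the wait budget runs out. The exit-code contract is easy to see in isolation:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process yet"   # pgrep: 0 = match, 1 = none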
	I0331 11:22:46.753388   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043402554s)
	I0331 11:22:46.753499   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:46.753507   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:46.792982   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:46.792997   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:46.805753   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:46.805770   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:46.861145   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:46.861159   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:46.861166   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:49.385848   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:49.551125   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:49.573615   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.573629   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:49.573695   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:49.592828   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.592841   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:49.592907   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:49.612151   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.612165   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:49.612231   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:49.631432   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.631446   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:49.631516   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:49.649789   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.649803   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:49.649870   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:49.668617   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.668630   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:49.668696   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:49.689002   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.689015   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:49.689080   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:49.708143   21423 logs.go:277] 0 containers: []
	W0331 11:22:49.708155   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:49.708162   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:49.708170   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:49.732613   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:49.732626   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:22:48.882081   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:48.882284   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:48.893592   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:49.382044   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:49.382215   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:49.393481   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:49.880554   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:49.880692   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:49.891697   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:50.380160   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:50.380324   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:50.391683   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:50.881435   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:50.881567   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:50.892738   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:51.381945   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:51.382129   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:51.393220   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:51.879947   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:51.880046   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:51.889916   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:52.380160   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:52.380367   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:52.391799   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:52.881898   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:52.882113   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:52.893802   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:53.382000   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:53.382115   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:53.394262   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:51.780967   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048432075s)
	I0331 11:22:51.781076   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:51.781084   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:51.818484   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:51.818498   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:51.830558   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:51.830571   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:51.886014   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:54.388001   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:54.548741   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:54.569439   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.569457   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:54.569545   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:54.589915   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.589929   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:54.589997   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:54.609247   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.609261   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:54.609327   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:54.634600   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.634614   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:54.634682   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:54.654624   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.654637   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:54.654707   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:54.673470   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.673500   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:54.673577   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:54.692817   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.692832   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:54.692902   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:54.711886   21423 logs.go:277] 0 containers: []
	W0331 11:22:54.711899   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:54.711906   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:54.711917   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
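Note: the block above is one enumeration pass of logs.go, listing containers per control-plane component by the k8s_ name prefix. A sketch of the same docker ps filtering, assuming only what the logged commands show:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter name=k8s_<component> --format {{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; Fields drops blanks
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}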
	I0331 11:22:53.880961   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:53.881100   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:53.892144   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:54.380635   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:54.380787   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:54.392238   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:54.392248   22066 api_server.go:165] Checking apiserver status ...
	I0331 11:22:54.392294   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:22:54.400909   22066 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:54.400922   22066 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0331 11:22:54.400931   22066 kubeadm.go:1120] stopping kube-system containers ...
	I0331 11:22:54.401004   22066 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:22:54.422980   22066 docker.go:465] Stopping containers: [cf08432cfef3 58fbcccdac6d 87c7a5d9446c 3706f32a7dcb c4875ee5d549 04021de8ac69 6472c5bfd5b2 a81ce49ddf77 36621972e066 f3d765e63f1e 56a79bdf704d d7565ed3ffcd 0d4b8c7d41bd a6029a36efbe 18755750041f cc026e1c704f]
	I0331 11:22:54.423059   22066 ssh_runner.go:195] Run: docker stop cf08432cfef3 58fbcccdac6d 87c7a5d9446c 3706f32a7dcb c4875ee5d549 04021de8ac69 6472c5bfd5b2 a81ce49ddf77 36621972e066 f3d765e63f1e 56a79bdf704d d7565ed3ffcd 0d4b8c7d41bd a6029a36efbe 18755750041f cc026e1c704f
	I0331 11:22:54.444307   22066 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0331 11:22:54.455060   22066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:22:54.463148   22066 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 31 18:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 31 18:21 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Mar 31 18:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 31 18:21 /etc/kubernetes/scheduler.conf
	
	I0331 11:22:54.463211   22066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0331 11:22:54.471514   22066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0331 11:22:54.479442   22066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0331 11:22:54.487323   22066 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:54.487391   22066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0331 11:22:54.495514   22066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0331 11:22:54.503789   22066 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:22:54.503853   22066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
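Note: kubeadm.go reads grep's exit status 1 as "expected endpoint absent" and deletes the stale kubeconfig so the following init phases can regenerate it. A hedged sketch of that check-and-remove step; the endpoint and paths are copied from the log, the control flow is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// ensureFresh keeps a config file only if it already points at the
// expected control-plane endpoint; otherwise it removes the stale file.
func ensureFresh(path string) error {
	err := exec.Command("sudo", "grep", endpoint, path).Run()
	if err == nil {
		return nil // endpoint present, keep the file
	}
	if _, ok := err.(*exec.ExitError); ok {
		// grep exited non-zero: endpoint missing, drop the stale config
		return exec.Command("sudo", "rm", "-f", path).Run()
	}
	return err // grep itself could not run
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureFresh(f); err != nil {
			fmt.Println(f, err)
		}
	}
}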
	I0331 11:22:54.511740   22066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:22:54.520123   22066 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0331 11:22:54.520138   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:22:54.573630   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:22:55.186267   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:22:55.324485   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:22:55.392765   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
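Note: the five kubeadm init phases above run in a fixed order against the same generated kubeadm.yaml. An illustrative Go driver for that sequence; the PATH prefix and config path mirror the log, the loop itself is an assumption.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local",
	}
	for _, p := range phases {
		cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" ` +
			"kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
		// phases are re-run in sequence as part of a reconfigure, as in the log
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			fmt.Println("phase", p, "failed:", err)
			return
		}
	}
}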
	I0331 11:22:55.496257   22066 api_server.go:51] waiting for apiserver process to appear ...
	I0331 11:22:55.496342   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:56.009626   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:56.508626   22066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:56.521095   22066 api_server.go:71] duration metric: took 1.024906056s to wait for apiserver process to appear ...
	I0331 11:22:56.521111   22066 api_server.go:87] waiting for apiserver healthz status ...
	I0331 11:22:56.521129   22066 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53723/healthz ...
	I0331 11:22:56.522372   22066 api_server.go:268] stopped: https://127.0.0.1:53723/healthz: Get "https://127.0.0.1:53723/healthz": EOF
	I0331 11:22:57.023167   22066 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53723/healthz ...
	I0331 11:22:58.750092   22066 api_server.go:278] https://127.0.0.1:53723/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0331 11:22:58.750110   22066 api_server.go:102] status: https://127.0.0.1:53723/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0331 11:22:59.022713   22066 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53723/healthz ...
	I0331 11:22:59.027943   22066 api_server.go:278] https://127.0.0.1:53723/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0331 11:22:59.027962   22066 api_server.go:102] status: https://127.0.0.1:53723/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0331 11:22:59.524405   22066 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53723/healthz ...
	I0331 11:22:59.531416   22066 api_server.go:278] https://127.0.0.1:53723/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0331 11:22:59.531429   22066 api_server.go:102] status: https://127.0.0.1:53723/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0331 11:23:00.022566   22066 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53723/healthz ...
	I0331 11:23:00.029675   22066 api_server.go:278] https://127.0.0.1:53723/healthz returned 200:
	ok
	I0331 11:23:00.036767   22066 api_server.go:140] control plane version: v1.26.3
	I0331 11:23:00.036783   22066 api_server.go:130] duration metric: took 3.515839488s to wait for apiserver health ...
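Note: the healthz probe above tolerates three transient states before succeeding: a connection EOF while the socket is still coming up, a 403 while RBAC bootstrap has not yet authorized the anonymous probe, and a 500 while poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing. A sketch of such a poll loop; the port and the self-signed-cert handling are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls until /healthz returns 200 "ok", treating EOF, 403,
// and 500 as retryable, matching the progression in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// the bootstrapping apiserver serves a cert we cannot verify here
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen above
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://127.0.0.1:53723/healthz", 4*time.Minute))
}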
	I0331 11:23:00.036792   22066 cni.go:84] Creating CNI manager for ""
	I0331 11:23:00.036820   22066 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:23:00.058880   22066 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0331 11:22:56.754826   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.042995424s)
	I0331 11:22:56.754931   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:56.754939   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:56.796025   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:56.796049   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:56.812656   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:56.812671   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:56.873895   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:56.873907   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:56.873914   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:59.400245   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:22:59.548658   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:22:59.571672   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.571685   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:22:59.571781   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:22:59.591528   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.591541   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:22:59.591612   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:22:59.611067   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.611081   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:22:59.611148   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:22:59.630062   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.630076   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:22:59.630144   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:22:59.649183   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.649205   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:22:59.649289   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:22:59.668767   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.668780   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:22:59.668848   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:22:59.687528   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.687541   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:22:59.687607   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:22:59.707449   21423 logs.go:277] 0 containers: []
	W0331 11:22:59.707462   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:22:59.707468   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:22:59.707477   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:22:59.745586   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:22:59.745607   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:22:59.759082   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:22:59.759100   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:22:59.834730   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:22:59.834751   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:22:59.834759   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:22:59.860931   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:22:59.860947   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:00.080401   22066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0331 11:23:00.090277   22066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
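Note: the 457-byte file copied above is the bridge CNI config; its exact contents are not in the log. The following writes a generic bridge conflist of the same shape, with the subnet and plugin options as assumptions.

package main

import (
	"fmt"
	"os"
)

// A minimal two-plugin bridge conflist; minikube's real file may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}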
	I0331 11:23:00.103408   22066 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 11:23:00.110463   22066 system_pods.go:59] 8 kube-system pods found
	I0331 11:23:00.110480   22066 system_pods.go:61] "coredns-787d4945fb-l5ngt" [a8b100ba-3b34-4ce9-98ed-edf98636e77e] Running
	I0331 11:23:00.110487   22066 system_pods.go:61] "etcd-embed-certs-877000" [d5bbbff3-7de8-4575-8bde-d1219e2a04ea] Running
	I0331 11:23:00.110497   22066 system_pods.go:61] "kube-apiserver-embed-certs-877000" [396f76a6-0fd0-422d-b440-cf204c49e82c] Running
	I0331 11:23:00.110510   22066 system_pods.go:61] "kube-controller-manager-embed-certs-877000" [dc08134e-f3e7-4332-ba9c-34038f7a2354] Running
	I0331 11:23:00.110519   22066 system_pods.go:61] "kube-proxy-glshl" [35f9a517-73db-4f4a-bd1b-77d491555f16] Running
	I0331 11:23:00.110522   22066 system_pods.go:61] "kube-scheduler-embed-certs-877000" [c0ec16b1-e8db-4fe4-84df-564aaeced1d6] Running
	I0331 11:23:00.110530   22066 system_pods.go:61] "metrics-server-cc4f5f75f-bf8bb" [8b8757c2-efb3-4292-9b93-be27858e4d0d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0331 11:23:00.110539   22066 system_pods.go:61] "storage-provisioner" [84fc86e0-fe4d-40f3-af22-a6d85cdc8a78] Running
	I0331 11:23:00.110544   22066 system_pods.go:74] duration metric: took 7.125724ms to wait for pod list to return data ...
	I0331 11:23:00.110550   22066 node_conditions.go:102] verifying NodePressure condition ...
	I0331 11:23:00.113808   22066 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0331 11:23:00.113820   22066 node_conditions.go:123] node cpu capacity is 6
	I0331 11:23:00.113830   22066 node_conditions.go:105] duration metric: took 3.275638ms to run NodePressure ...
	I0331 11:23:00.113841   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:23:00.242514   22066 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0331 11:23:00.246801   22066 retry.go:31] will retry after 345.88196ms: kubelet not initialised
	I0331 11:23:00.600131   22066 retry.go:31] will retry after 550.608226ms: kubelet not initialised
	I0331 11:23:01.157600   22066 retry.go:31] will retry after 346.683738ms: kubelet not initialised
	I0331 11:23:01.510450   22066 retry.go:31] will retry after 1.036719132s: kubelet not initialised
	I0331 11:23:02.551648   22066 kubeadm.go:784] kubelet initialised
	I0331 11:23:02.551662   22066 kubeadm.go:785] duration metric: took 2.309249871s waiting for restarted kubelet to initialise ...
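Note: retry.go logs a randomized, growing delay before each "kubelet not initialised" re-check. A sketch of that pattern; the backoff constants are illustrative, not minikube's.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with jittered, growing sleeps until it passes
// or the deadline elapses, echoing the "will retry after ..." lines above.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if err := check(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: kubelet not initialised\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return errors.New("timed out waiting for kubelet")
}

func main() {
	start := time.Now()
	err := retryUntil(30*time.Second, func() error {
		if time.Since(start) > 2*time.Second {
			return nil // pretend kubelet came up after ~2s, as in the log
		}
		return errors.New("kubelet not initialised")
	})
	fmt.Println(err)
}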
	I0331 11:23:02.551670   22066 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 11:23:02.556518   22066 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-l5ngt" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.561426   22066 pod_ready.go:92] pod "coredns-787d4945fb-l5ngt" in "kube-system" namespace has status "Ready":"True"
	I0331 11:23:02.561436   22066 pod_ready.go:81] duration metric: took 4.906129ms waiting for pod "coredns-787d4945fb-l5ngt" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.561443   22066 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.591485   22066 pod_ready.go:92] pod "etcd-embed-certs-877000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:23:02.591499   22066 pod_ready.go:81] duration metric: took 30.05224ms waiting for pod "etcd-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.591508   22066 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.597858   22066 pod_ready.go:92] pod "kube-apiserver-embed-certs-877000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:23:02.597869   22066 pod_ready.go:81] duration metric: took 6.356433ms waiting for pod "kube-apiserver-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.597878   22066 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.602953   22066 pod_ready.go:92] pod "kube-controller-manager-embed-certs-877000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:23:02.602963   22066 pod_ready.go:81] duration metric: took 5.077136ms waiting for pod "kube-controller-manager-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.602972   22066 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-glshl" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:02.951938   22066 pod_ready.go:92] pod "kube-proxy-glshl" in "kube-system" namespace has status "Ready":"True"
	I0331 11:23:02.951949   22066 pod_ready.go:81] duration metric: took 348.989643ms waiting for pod "kube-proxy-glshl" in "kube-system" namespace to be "Ready" ...
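Note: the "Ready" test pod_ready.go applies above corresponds to the pod's PodReady condition being True. A sketch using the k8s.io/api types; the sample pod is fabricated for illustration.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isReady reports whether the PodReady condition is True, the same
// predicate behind the `has status "Ready":"True"` lines above.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println("ready:", isReady(pod)) // true
}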
	I0331 11:23:02.951956   22066 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:01.908833   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047976847s)
	I0331 11:23:04.410288   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:04.548739   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:04.570362   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.570376   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:04.570446   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:04.590951   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.590964   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:04.591044   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:04.610733   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.610745   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:04.610809   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:04.629647   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.629662   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:04.629732   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:04.649186   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.649199   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:04.649268   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:04.668171   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.668184   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:04.668249   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:04.688656   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.688669   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:04.688735   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:04.707495   21423 logs.go:277] 0 containers: []
	W0331 11:23:04.707509   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:04.707517   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:04.707525   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:04.744682   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:04.744696   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:04.757029   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:04.757043   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:04.815382   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:04.815400   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:04.815407   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:04.843261   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:04.843280   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:05.361667   22066 pod_ready.go:102] pod "kube-scheduler-embed-certs-877000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:07.861139   22066 pod_ready.go:102] pod "kube-scheduler-embed-certs-877000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:06.889723   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046534007s)
	I0331 11:23:09.389955   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:09.548411   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:09.569634   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.569648   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:09.569722   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:09.589780   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.589794   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:09.589875   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:09.610251   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.610264   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:09.610334   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:09.631293   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.631307   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:09.631376   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:09.651338   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.651351   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:09.651419   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:09.671354   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.671366   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:09.671431   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:09.691838   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.691850   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:09.691919   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:09.710869   21423 logs.go:277] 0 containers: []
	W0331 11:23:09.710883   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:09.710891   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:09.710897   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:09.748387   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:09.748403   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:09.760528   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:09.760544   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:09.815205   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:09.815217   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:09.815224   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:09.841050   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:09.841064   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:08.860561   22066 pod_ready.go:92] pod "kube-scheduler-embed-certs-877000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:23:08.860575   22066 pod_ready.go:81] duration metric: took 5.908910327s waiting for pod "kube-scheduler-embed-certs-877000" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:08.860581   22066 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace to be "Ready" ...
	I0331 11:23:10.872849   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:13.373547   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:11.889078   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048104155s)
	I0331 11:23:14.389278   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:14.547855   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:14.568170   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.568185   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:14.568254   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:14.587182   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.587196   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:14.587264   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:14.607141   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.607154   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:14.607220   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:14.626768   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.626781   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:14.626845   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:14.646419   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.646432   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:14.646512   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:14.666074   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.666087   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:14.666154   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:14.685705   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.685719   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:14.685787   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:14.705636   21423 logs.go:277] 0 containers: []
	W0331 11:23:14.705649   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:14.705656   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:14.705664   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:14.742416   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:14.742434   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:14.755049   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:14.755064   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:14.815109   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:14.815121   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:14.815128   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:14.841394   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:14.841411   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:15.873443   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:17.874463   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:16.886623   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045303023s)
	I0331 11:23:19.386735   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:19.547705   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:19.568772   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.568786   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:19.568857   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:19.588754   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.588769   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:19.588836   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:19.608619   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.608634   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:19.608702   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:19.628847   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.628861   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:19.628928   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:19.647575   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.647588   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:19.647653   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:19.666801   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.666815   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:19.666881   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:19.686586   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.686598   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:19.686665   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:19.705851   21423 logs.go:277] 0 containers: []
	W0331 11:23:19.705864   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:19.705871   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:19.705879   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:19.745708   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:19.745724   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:19.758179   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:19.758200   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:19.814044   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:19.814055   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:19.814063   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:19.840642   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:19.840660   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:20.372766   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:22.873463   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:21.886961   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046391826s)
	I0331 11:23:24.389125   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:24.547833   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:24.568815   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.568829   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:24.568897   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:24.588413   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.588427   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:24.588495   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:24.609351   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.609363   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:24.609437   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:24.629075   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.629088   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:24.629155   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:24.648570   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.648583   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:24.648653   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:24.668686   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.668701   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:24.668768   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:24.688604   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.688616   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:24.688689   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:24.708111   21423 logs.go:277] 0 containers: []
	W0331 11:23:24.708124   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:24.708131   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:24.708141   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:25.374317   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:27.872637   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:26.751781   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043728803s)
	I0331 11:23:26.751890   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:26.751899   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:26.789873   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:26.789892   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:26.802516   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:26.802533   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:26.857353   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:26.857366   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:26.857373   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:29.382363   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:29.547107   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:29.566804   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.566818   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:29.566889   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:29.588028   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.588041   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:29.588105   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:29.614722   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.614742   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:29.614815   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:29.633253   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.633268   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:29.633333   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:29.653839   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.653852   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:29.653916   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:29.673874   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.673888   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:29.673955   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:29.694426   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.694440   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:29.694506   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:29.713682   21423 logs.go:277] 0 containers: []
	W0331 11:23:29.713697   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:29.713709   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:29.713726   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:29.754068   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:29.754090   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:29.767831   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:29.767846   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:29.830629   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:29.830641   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:29.830649   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:29.856123   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:29.856139   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:30.372178   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:32.873253   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:31.904519   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048469725s)
	I0331 11:23:34.405590   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:34.547014   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:34.569730   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.569743   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:34.569809   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:34.590129   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.590143   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:34.590222   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:34.609190   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.609203   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:34.609271   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:34.627848   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.627861   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:34.627926   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:34.648147   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.648160   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:34.648227   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:34.668624   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.668636   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:34.668701   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:34.687703   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.687716   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:34.687783   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:34.707883   21423 logs.go:277] 0 containers: []
	W0331 11:23:34.707896   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:34.707903   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:34.707909   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:34.745065   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:34.745080   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:34.757393   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:34.757406   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:34.812552   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:34.812567   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:34.812574   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:34.838823   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:34.838839   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:35.372588   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:37.872821   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:36.886260   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047513153s)
	I0331 11:23:39.386842   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:39.546523   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:23:39.566342   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.566356   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:23:39.566427   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:23:39.586917   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.586929   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:23:39.586998   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:23:39.609561   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.609575   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:23:39.609642   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:23:39.629222   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.629240   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:23:39.629309   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:23:39.649860   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.649873   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:23:39.649941   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:23:39.669798   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.669812   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:23:39.669881   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:23:39.690535   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.690549   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:23:39.690616   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:23:39.710540   21423 logs.go:277] 0 containers: []
	W0331 11:23:39.710553   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:23:39.710560   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:23:39.710568   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:23:39.748068   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:23:39.748083   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:23:39.759938   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:23:39.759951   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:23:39.815348   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:23:39.815360   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:23:39.815368   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:23:39.840631   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:23:39.840645   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:23:40.372744   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:42.871044   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:41.889540   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04898731s)
	I0331 11:23:44.389901   21423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:23:44.547243   21423 kubeadm.go:637] restartCluster took 4m11.157494112s
	W0331 11:23:44.547360   21423 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0331 11:23:44.547394   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0331 11:23:44.962986   21423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:23:44.973068   21423 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:23:44.980982   21423 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:23:44.981032   21423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:23:44.988742   21423 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 11:23:44.988772   21423 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:23:45.037554   21423 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:23:45.037612   21423 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:23:45.208508   21423 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:23:45.208638   21423 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:23:45.208731   21423 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:23:45.365604   21423 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:23:45.366337   21423 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:23:45.373561   21423 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:23:45.444598   21423 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:23:45.465999   21423 out.go:204]   - Generating certificates and keys ...
	I0331 11:23:45.466091   21423 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:23:45.466180   21423 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:23:45.466318   21423 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 11:23:45.466386   21423 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 11:23:45.466450   21423 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 11:23:45.466497   21423 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 11:23:45.466554   21423 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 11:23:45.466619   21423 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 11:23:45.466680   21423 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 11:23:45.466775   21423 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 11:23:45.466811   21423 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 11:23:45.466859   21423 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:23:45.548669   21423 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:23:45.652195   21423 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:23:45.763221   21423 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:23:45.855620   21423 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:23:45.856133   21423 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:23:45.877402   21423 out.go:204]   - Booting up control plane ...
	I0331 11:23:45.877492   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:23:45.877558   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:23:45.877617   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:23:45.877690   21423 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:23:45.877817   21423 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:23:44.871532   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:46.871525   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:49.371041   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:51.871009   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:54.370548   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:56.371147   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:23:58.871138   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:01.370429   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:03.870364   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:06.367951   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:08.371347   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:10.870947   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:13.370208   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:15.370452   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:17.870510   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:20.369118   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:22.369836   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
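	(The repeating pod_ready lines above are minikube polling the pod's Ready condition until a 4m0s deadline. A roughly equivalent manual check, as a sketch rather than minikube's exact logic, using the profile name embed-certs-877000 that appears later in this log as the kubectl context:

	  # poll the Ready condition of the metrics-server pod every 2s
	  while true; do
	    kubectl --context embed-certs-877000 -n kube-system get pod metrics-server-cc4f5f75f-bf8bb \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    sleep 2
	  done
	)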
	I0331 11:24:25.863271   21423 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:24:25.864251   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:24:25.864454   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:24:24.869667   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:27.368401   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:30.865616   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:24:30.865866   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:24:29.369138   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:31.867741   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:33.868126   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:35.868397   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:38.368598   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:40.865752   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:24:40.865911   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:24:40.368725   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:42.368995   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:44.866924   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:46.868535   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:49.367681   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:51.868634   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:54.367245   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:24:56.868501   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:00.865639   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:25:00.865820   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:24:59.366986   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:01.367667   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:03.866996   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:06.366709   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:08.867228   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:10.867346   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:13.364950   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:15.366752   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:17.864982   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:19.865904   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:21.867138   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:24.365525   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:26.866279   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:29.366066   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:31.866918   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:34.364934   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:36.364959   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:38.366104   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:40.864905   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:25:40.865082   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:25:40.865103   21423 kubeadm.go:322] 
	I0331 11:25:40.865148   21423 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:25:40.865184   21423 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:25:40.865191   21423 kubeadm.go:322] 
	I0331 11:25:40.865240   21423 kubeadm.go:322] This error is likely caused by:
	I0331 11:25:40.865268   21423 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:25:40.865338   21423 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:25:40.865344   21423 kubeadm.go:322] 
	I0331 11:25:40.865456   21423 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:25:40.865483   21423 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:25:40.865506   21423 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:25:40.865510   21423 kubeadm.go:322] 
	I0331 11:25:40.865625   21423 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:25:40.865729   21423 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0331 11:25:40.865808   21423 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0331 11:25:40.865845   21423 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:25:40.865897   21423 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:25:40.865923   21423 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:25:40.868976   21423 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:25:40.869053   21423 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:25:40.869163   21423 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:25:40.869275   21423 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:25:40.869348   21423 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:25:40.869414   21423 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0331 11:25:40.869542   21423 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
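	(When the wait-control-plane phase times out like this, the kubeadm output above already names the useful probes. Inside the minikube node they can be run directly; this is a sketch assuming the docker driver, with PROFILE standing in for the failing profile name, which is not shown in this log:

	  minikube ssh -p PROFILE                        # enter the node (PROFILE is a placeholder)
	  sudo systemctl status kubelet                  # is the kubelet running?
	  sudo journalctl -xeu kubelet | tail -n 50      # why did it stop?
	  curl -sSL http://localhost:10248/healthz       # the endpoint kubeadm polls above
	  sudo docker ps -a | grep kube | grep -v pause  # any crashed control-plane containers?
	)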
	
	I0331 11:25:40.869583   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0331 11:25:41.282624   21423 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:25:41.292920   21423 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:25:41.292974   21423 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:25:41.300757   21423 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 11:25:41.300777   21423 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:25:41.349949   21423 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0331 11:25:41.350001   21423 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:25:41.523944   21423 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:25:41.524033   21423 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:25:41.524131   21423 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:25:41.683779   21423 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:25:41.684705   21423 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:25:41.691576   21423 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0331 11:25:41.766654   21423 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:25:41.788377   21423 out.go:204]   - Generating certificates and keys ...
	I0331 11:25:41.788466   21423 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:25:41.788531   21423 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:25:41.788592   21423 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 11:25:41.788672   21423 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 11:25:41.788757   21423 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 11:25:41.788816   21423 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 11:25:41.788908   21423 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 11:25:41.788960   21423 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 11:25:41.789021   21423 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 11:25:41.789078   21423 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 11:25:41.789110   21423 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 11:25:41.789146   21423 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:25:42.012536   21423 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:25:42.163046   21423 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:25:42.241784   21423 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:25:42.536134   21423 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:25:42.536733   21423 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:25:40.866769   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:43.365040   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:42.558306   21423 out.go:204]   - Booting up control plane ...
	I0331 11:25:42.558531   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:25:42.558661   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:25:42.558782   21423 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:25:42.558955   21423 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:25:42.559186   21423 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:25:45.864845   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:47.865465   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:49.866093   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:52.366126   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:54.863707   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:56.865754   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:25:59.364550   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:01.865853   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:04.363760   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:06.365623   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:08.864330   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:10.864387   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:12.864558   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:14.864479   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:17.362736   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:19.363685   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:21.363967   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:22.542983   21423 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0331 11:26:22.543923   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:22.544165   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:26:23.862883   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:25.863208   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:28.361810   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:27.545859   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:27.546117   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:26:30.362096   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:32.363283   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:34.363493   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:36.862521   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:37.547896   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:37.548104   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:26:39.362606   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:41.363072   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:43.862601   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:45.863579   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:48.362061   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:50.364017   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:52.861745   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:55.361724   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:57.363310   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:26:57.547453   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:26:57.547604   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:26:59.861229   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:27:01.861270   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:27:03.862549   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:27:06.362785   22066 pod_ready.go:102] pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace has status "Ready":"False"
	I0331 11:27:08.855453   22066 pod_ready.go:81] duration metric: took 4m0.006844873s waiting for pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace to be "Ready" ...
	E0331 11:27:08.855504   22066 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-cc4f5f75f-bf8bb" in "kube-system" namespace to be "Ready" (will not retry!)
	I0331 11:27:08.855575   22066 pod_ready.go:38] duration metric: took 4m6.316211751s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 11:27:08.855601   22066 kubeadm.go:637] restartCluster took 4m24.577715758s
	W0331 11:27:08.855704   22066 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0331 11:27:08.855741   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0331 11:27:13.083308   22066 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.227765064s)
	I0331 11:27:13.083379   22066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:27:13.093667   22066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:27:13.101454   22066 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0331 11:27:13.101513   22066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:27:13.109326   22066 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0331 11:27:13.109355   22066 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0331 11:27:13.158236   22066 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
	I0331 11:27:13.158281   22066 kubeadm.go:322] [preflight] Running pre-flight checks
	I0331 11:27:13.264205   22066 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0331 11:27:13.264294   22066 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0331 11:27:13.264367   22066 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0331 11:27:13.396249   22066 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0331 11:27:13.418686   22066 out.go:204]   - Generating certificates and keys ...
	I0331 11:27:13.418748   22066 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0331 11:27:13.418824   22066 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0331 11:27:13.418892   22066 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0331 11:27:13.418976   22066 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0331 11:27:13.419051   22066 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0331 11:27:13.419112   22066 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0331 11:27:13.419195   22066 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0331 11:27:13.419252   22066 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0331 11:27:13.419317   22066 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0331 11:27:13.419400   22066 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0331 11:27:13.419431   22066 kubeadm.go:322] [certs] Using the existing "sa" key
	I0331 11:27:13.419476   22066 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0331 11:27:13.548593   22066 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0331 11:27:13.679878   22066 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0331 11:27:13.779260   22066 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0331 11:27:14.187204   22066 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0331 11:27:14.197826   22066 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0331 11:27:14.198486   22066 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0331 11:27:14.198533   22066 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0331 11:27:14.268777   22066 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0331 11:27:14.290174   22066 out.go:204]   - Booting up control plane ...
	I0331 11:27:14.290269   22066 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0331 11:27:14.290357   22066 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0331 11:27:14.290421   22066 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0331 11:27:14.290489   22066 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0331 11:27:14.290654   22066 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0331 11:27:19.277120   22066 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002676 seconds
	I0331 11:27:19.277305   22066 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0331 11:27:19.286192   22066 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0331 11:27:19.803008   22066 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0331 11:27:19.803159   22066 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-877000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0331 11:27:20.312969   22066 kubeadm.go:322] [bootstrap-token] Using token: buwbw1.g9zy5i6htyczsmm0
	I0331 11:27:20.335271   22066 out.go:204]   - Configuring RBAC rules ...
	I0331 11:27:20.335398   22066 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0331 11:27:20.337943   22066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0331 11:27:20.342762   22066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0331 11:27:20.345222   22066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0331 11:27:20.347645   22066 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0331 11:27:20.349965   22066 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0331 11:27:20.358505   22066 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0331 11:27:20.498032   22066 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0331 11:27:20.787382   22066 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0331 11:27:20.787407   22066 kubeadm.go:322] 
	I0331 11:27:20.787493   22066 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0331 11:27:20.787508   22066 kubeadm.go:322] 
	I0331 11:27:20.787643   22066 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0331 11:27:20.787656   22066 kubeadm.go:322] 
	I0331 11:27:20.787711   22066 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0331 11:27:20.787790   22066 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0331 11:27:20.787882   22066 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0331 11:27:20.787892   22066 kubeadm.go:322] 
	I0331 11:27:20.787966   22066 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0331 11:27:20.787978   22066 kubeadm.go:322] 
	I0331 11:27:20.788024   22066 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0331 11:27:20.788031   22066 kubeadm.go:322] 
	I0331 11:27:20.788097   22066 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0331 11:27:20.788199   22066 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0331 11:27:20.788303   22066 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0331 11:27:20.788321   22066 kubeadm.go:322] 
	I0331 11:27:20.788422   22066 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0331 11:27:20.788537   22066 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0331 11:27:20.788552   22066 kubeadm.go:322] 
	I0331 11:27:20.788673   22066 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token buwbw1.g9zy5i6htyczsmm0 \
	I0331 11:27:20.788807   22066 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:72b52de73b8df0b9cf3d3d236317efe9af979ce7654c1795e19815947d88a34b \
	I0331 11:27:20.788833   22066 kubeadm.go:322] 	--control-plane 
	I0331 11:27:20.788845   22066 kubeadm.go:322] 
	I0331 11:27:20.788910   22066 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0331 11:27:20.788915   22066 kubeadm.go:322] 
	I0331 11:27:20.788987   22066 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token buwbw1.g9zy5i6htyczsmm0 \
	I0331 11:27:20.789096   22066 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:72b52de73b8df0b9cf3d3d236317efe9af979ce7654c1795e19815947d88a34b 
	I0331 11:27:20.791747   22066 kubeadm.go:322] W0331 18:27:13.154169    8961 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0331 11:27:20.791894   22066 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0331 11:27:20.792028   22066 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:27:20.792038   22066 cni.go:84] Creating CNI manager for ""
	I0331 11:27:20.792050   22066 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:27:20.813918   22066 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0331 11:27:20.888096   22066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0331 11:27:20.897449   22066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
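	(The 457-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI config. Its exact contents are not shown in this log; the following is a representative bridge conflist, an assumption rather than a byte-for-byte copy of what minikube writes:

	  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "bridge",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }
	  EOF
	)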
	I0331 11:27:20.911044   22066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0331 11:27:20.911116   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:20.911126   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=945b3fc45ee9ac8e1ceaffb00a71ec22c717b10e minikube.k8s.io/name=embed-certs-877000 minikube.k8s.io/updated_at=2023_03_31T11_27_20_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:21.004216   22066 ops.go:34] apiserver oom_adj: -16
	I0331 11:27:21.004335   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:21.570641   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:22.068531   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:22.570643   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:23.070537   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:23.569077   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:24.068569   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:24.569809   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:25.068350   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:25.569042   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:26.069696   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:26.568407   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:27.069923   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:27.570258   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:28.069227   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:28.568861   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:29.068289   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:29.568196   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:30.069618   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:30.569980   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:31.068144   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:31.570079   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:32.069015   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:32.569182   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:33.070118   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:33.568167   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:34.068602   22066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0331 11:27:34.184108   22066 kubeadm.go:1073] duration metric: took 13.273717317s to wait for elevateKubeSystemPrivileges.
	I0331 11:27:34.184123   22066 kubeadm.go:403] StartCluster complete in 4m49.935535273s
	I0331 11:27:34.184140   22066 settings.go:142] acquiring lock: {Name:mk3cb9e1bd7c44f22a996c12a2b2b34c5bbc4ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:27:34.184239   22066 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:27:34.185026   22066 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:27:34.185328   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0331 11:27:34.185355   22066 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0331 11:27:34.185420   22066 addons.go:66] Setting storage-provisioner=true in profile "embed-certs-877000"
	I0331 11:27:34.185431   22066 addons.go:66] Setting dashboard=true in profile "embed-certs-877000"
	I0331 11:27:34.185438   22066 addons.go:228] Setting addon storage-provisioner=true in "embed-certs-877000"
	W0331 11:27:34.185446   22066 addons.go:237] addon storage-provisioner should already be in state true
	I0331 11:27:34.185443   22066 addons.go:228] Setting addon dashboard=true in "embed-certs-877000"
	I0331 11:27:34.185489   22066 host.go:66] Checking if "embed-certs-877000" exists ...
	I0331 11:27:34.185422   22066 addons.go:66] Setting default-storageclass=true in profile "embed-certs-877000"
	I0331 11:27:34.185502   22066 addons.go:66] Setting metrics-server=true in profile "embed-certs-877000"
	I0331 11:27:34.185517   22066 config.go:182] Loaded profile config "embed-certs-877000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:27:34.185527   22066 addons.go:228] Setting addon metrics-server=true in "embed-certs-877000"
	W0331 11:27:34.185535   22066 addons.go:237] addon metrics-server should already be in state true
	W0331 11:27:34.185488   22066 addons.go:237] addon dashboard should already be in state true
	I0331 11:27:34.185580   22066 host.go:66] Checking if "embed-certs-877000" exists ...
	I0331 11:27:34.185587   22066 host.go:66] Checking if "embed-certs-877000" exists ...
	I0331 11:27:34.185583   22066 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-877000"
	I0331 11:27:34.185933   22066 cli_runner.go:164] Run: docker container inspect embed-certs-877000 --format={{.State.Status}}
	I0331 11:27:34.186002   22066 cli_runner.go:164] Run: docker container inspect embed-certs-877000 --format={{.State.Status}}
	I0331 11:27:34.186082   22066 cli_runner.go:164] Run: docker container inspect embed-certs-877000 --format={{.State.Status}}
	I0331 11:27:34.186126   22066 cli_runner.go:164] Run: docker container inspect embed-certs-877000 --format={{.State.Status}}
	I0331 11:27:34.282877   22066 addons.go:228] Setting addon default-storageclass=true in "embed-certs-877000"
	W0331 11:27:34.301155   22066 addons.go:237] addon default-storageclass should already be in state true
	I0331 11:27:34.301186   22066 host.go:66] Checking if "embed-certs-877000" exists ...
	I0331 11:27:34.301115   22066 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:27:34.301877   22066 cli_runner.go:164] Run: docker container inspect embed-certs-877000 --format={{.State.Status}}
	I0331 11:27:34.338398   22066 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:27:34.380900   22066 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0331 11:27:34.360056   22066 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0331 11:27:34.380977   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0331 11:27:34.418009   22066 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0331 11:27:34.438898   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0331 11:27:34.439028   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:27:34.460075   22066 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0331 11:27:34.439048   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:27:34.498135   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0331 11:27:34.498168   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0331 11:27:34.498335   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:27:34.507114   22066 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0331 11:27:34.507140   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0331 11:27:34.507251   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:27:34.518215   22066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0331 11:27:34.543900   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:27:34.543911   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:27:34.583595   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:27:34.590478   22066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53724 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/embed-certs-877000/id_rsa Username:docker}
	I0331 11:27:34.717518   22066 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-877000" context rescaled to 1 replicas
	I0331 11:27:34.717551   22066 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 11:27:34.742011   22066 out.go:177] * Verifying Kubernetes components...
	I0331 11:27:34.782927   22066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:27:34.800438   22066 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0331 11:27:34.800458   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0331 11:27:34.802574   22066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:27:34.813085   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0331 11:27:34.813103   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0331 11:27:34.882640   22066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0331 11:27:34.895370   22066 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0331 11:27:34.895399   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0331 11:27:34.895408   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0331 11:27:34.895403   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0331 11:27:34.990092   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0331 11:27:34.990109   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0331 11:27:34.991947   22066 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0331 11:27:34.991961   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0331 11:27:35.010901   22066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0331 11:27:35.094010   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0331 11:27:35.094034   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0331 11:27:35.198947   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0331 11:27:35.198966   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0331 11:27:35.302547   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0331 11:27:35.302571   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0331 11:27:35.386932   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0331 11:27:35.386954   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0331 11:27:35.408679   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0331 11:27:35.408693   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0331 11:27:35.480430   22066 addons.go:420] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0331 11:27:35.480446   22066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0331 11:27:35.498154   22066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0331 11:27:36.107534   22066 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.589363054s)
	I0331 11:27:36.107563   22066 start.go:916] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0331 11:27:36.107577   22066 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.324693318s)
	I0331 11:27:36.107691   22066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-877000
	I0331 11:27:36.210381   22066 node_ready.go:35] waiting up to 6m0s for node "embed-certs-877000" to be "Ready" ...
	I0331 11:27:36.221042   22066 node_ready.go:49] node "embed-certs-877000" has status "Ready":"True"
	I0331 11:27:36.221055   22066 node_ready.go:38] duration metric: took 10.634882ms waiting for node "embed-certs-877000" to be "Ready" ...
	I0331 11:27:36.221061   22066 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 11:27:36.229668   22066 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-267cx" in "kube-system" namespace to be "Ready" ...
	I0331 11:27:36.277130   22066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.474603606s)
	I0331 11:27:36.277140   22066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.394551611s)
	I0331 11:27:36.290749   22066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.279879779s)
	I0331 11:27:36.290778   22066 addons.go:464] Verifying addon metrics-server=true in "embed-certs-877000"
	I0331 11:27:36.501735   22066 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.00360121s)
	I0331 11:27:36.523722   22066 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-877000 addons enable metrics-server	
	
	
	I0331 11:27:36.598854   22066 out.go:177] * Enabled addons: default-storageclass, metrics-server, dashboard
	I0331 11:27:36.641741   22066 addons.go:499] enable addons completed in 2.456514385s: enabled=[default-storageclass metrics-server dashboard]
	I0331 11:27:38.247301   22066 pod_ready.go:102] pod "coredns-787d4945fb-267cx" in "kube-system" namespace has status "Ready":"False"
	I0331 11:27:37.546901   21423 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0331 11:27:37.547043   21423 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0331 11:27:37.547055   21423 kubeadm.go:322] 
	I0331 11:27:37.547094   21423 kubeadm.go:322] Unfortunately, an error has occurred:
	I0331 11:27:37.547123   21423 kubeadm.go:322] 	timed out waiting for the condition
	I0331 11:27:37.547127   21423 kubeadm.go:322] 
	I0331 11:27:37.547160   21423 kubeadm.go:322] This error is likely caused by:
	I0331 11:27:37.547183   21423 kubeadm.go:322] 	- The kubelet is not running
	I0331 11:27:37.547260   21423 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0331 11:27:37.547270   21423 kubeadm.go:322] 
	I0331 11:27:37.547385   21423 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0331 11:27:37.547423   21423 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0331 11:27:37.547449   21423 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0331 11:27:37.547453   21423 kubeadm.go:322] 
	I0331 11:27:37.547530   21423 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0331 11:27:37.547602   21423 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0331 11:27:37.547676   21423 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0331 11:27:37.547721   21423 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0331 11:27:37.547786   21423 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0331 11:27:37.547812   21423 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0331 11:27:37.550772   21423 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0331 11:27:37.550830   21423 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0331 11:27:37.550912   21423 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
	I0331 11:27:37.551023   21423 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0331 11:27:37.551103   21423 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0331 11:27:37.551161   21423 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0331 11:27:37.551187   21423 kubeadm.go:403] StartCluster complete in 8m4.201191772s
	I0331 11:27:37.551286   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0331 11:27:37.570650   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.570663   21423 logs.go:279] No container was found matching "kube-apiserver"
	I0331 11:27:37.570734   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0331 11:27:37.589714   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.589727   21423 logs.go:279] No container was found matching "etcd"
	I0331 11:27:37.589792   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0331 11:27:37.610984   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.610998   21423 logs.go:279] No container was found matching "coredns"
	I0331 11:27:37.611068   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0331 11:27:37.633571   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.633584   21423 logs.go:279] No container was found matching "kube-scheduler"
	I0331 11:27:37.633657   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0331 11:27:37.654128   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.654143   21423 logs.go:279] No container was found matching "kube-proxy"
	I0331 11:27:37.654221   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0331 11:27:37.675062   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.675075   21423 logs.go:279] No container was found matching "kube-controller-manager"
	I0331 11:27:37.675141   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0331 11:27:37.694415   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.694429   21423 logs.go:279] No container was found matching "kindnet"
	I0331 11:27:37.694498   21423 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0331 11:27:37.716996   21423 logs.go:277] 0 containers: []
	W0331 11:27:37.717013   21423 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0331 11:27:37.717021   21423 logs.go:123] Gathering logs for kubelet ...
	I0331 11:27:37.717029   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0331 11:27:37.762318   21423 logs.go:123] Gathering logs for dmesg ...
	I0331 11:27:37.762351   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0331 11:27:37.778195   21423 logs.go:123] Gathering logs for describe nodes ...
	I0331 11:27:37.778211   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0331 11:27:37.839666   21423 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0331 11:27:37.839681   21423 logs.go:123] Gathering logs for Docker ...
	I0331 11:27:37.839688   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0331 11:27:37.866077   21423 logs.go:123] Gathering logs for container status ...
	I0331 11:27:37.866096   21423 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0331 11:27:39.918640   21423 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052628923s)
	W0331 11:27:39.918781   21423 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.2. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0331 11:27:39.918806   21423 out.go:239] * 
	W0331 11:27:39.918945   21423 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the block above ...]
	
	W0331 11:27:39.918981   21423 out.go:239] * 
	W0331 11:27:39.919924   21423 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0331 11:27:40.013221   21423 out.go:177] 
	W0331 11:27:40.087443   21423 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... kubeadm init stdout/stderr identical to the first kubeadm init block above ...]
	
	W0331 11:27:40.087525   21423 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0331 11:27:40.087581   21423 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0331 11:27:40.108091   21423 out.go:177] 
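The suggestion above translates directly into a start invocation; a hedged sketch (binary path and profile name taken from this log, and whether the flag resolves the cgroup mismatch depends on the minikube/kubelet versions under test):

	out/minikube-darwin-amd64 start -p old-k8s-version-221000 --driver=docker --extra-config=kubelet.cgroup-driver=systemd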
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-03-31 18:19:17 UTC, end at Fri 2023-03-31 18:27:41 UTC. --
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.224647123Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225192443Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225275873Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225945996Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226028503Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226077622Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226092567Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226120653Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226137969Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226158434Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226172127Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226220646Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226362523Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226626084Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226676606Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.227155872Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.558285544Z" level=info msg="Loading containers: start."
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.640881276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.675071947Z" level=info msg="Loading containers: done."
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.683436290Z" level=info msg="Docker daemon" commit=219f21b graphdriver=overlay2 version=23.0.2
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.683497721Z" level=info msg="Daemon has completed initialization"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.704168089Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 systemd[1]: Started Docker Application Container Engine.
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.707476643Z" level=info msg="API listen on [::]:2376"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.715129950Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-03-31T18:27:43Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:27:44 up  1:26,  0 users,  load average: 2.38, 1.11, 1.36
	Linux old-k8s-version-221000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-03-31 18:19:17 UTC, end at Fri 2023-03-31 18:27:44 UTC. --
	Mar 31 18:27:42 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 31 18:27:42 old-k8s-version-221000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Mar 31 18:27:42 old-k8s-version-221000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 31 18:27:42 old-k8s-version-221000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14282]: I0331 18:27:43.035588   14282 server.go:410] Version: v1.16.0
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14282]: I0331 18:27:43.035864   14282 plugins.go:100] No cloud provider specified.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14282]: I0331 18:27:43.035876   14282 server.go:773] Client rotation is on, will bootstrap in background
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14282]: I0331 18:27:43.037693   14282 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14282]: W0331 18:27:43.038482   14282 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14282]: W0331 18:27:43.038549   14282 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14282]: F0331 18:27:43.038589   14282 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 31 18:27:43 old-k8s-version-221000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 31 18:27:43 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 31 18:27:43 old-k8s-version-221000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Mar 31 18:27:43 old-k8s-version-221000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 31 18:27:43 old-k8s-version-221000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14295]: I0331 18:27:43.802066   14295 server.go:410] Version: v1.16.0
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14295]: I0331 18:27:43.802274   14295 plugins.go:100] No cloud provider specified.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14295]: I0331 18:27:43.802285   14295 server.go:773] Client rotation is on, will bootstrap in background
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14295]: I0331 18:27:43.804169   14295 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14295]: W0331 18:27:43.804916   14295 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14295]: W0331 18:27:43.804989   14295 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 31 18:27:43 old-k8s-version-221000 kubelet[14295]: F0331 18:27:43.805013   14295 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 31 18:27:43 old-k8s-version-221000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 31 18:27:43 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0331 11:27:43.945707   22415 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (423.566068ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-221000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.89s)
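The kubelet journal above pins down the failure: every restart exits with "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 161-162), which is why kubeadm's wait-control-plane phase timed out with no control-plane containers ever created. The same journal can be pulled from the node container directly, since the container name matches the profile name in this log; a hedged example (assumes the container has not yet been deleted by test cleanup):

	docker exec old-k8s-version-221000 journalctl -u kubelet -n 50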

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0331 11:27:52.624766    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:28:00.451371    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:28:16.334134    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:28:27.092378    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:28:44.900238    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:29:08.704069    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:29:23.492638    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:29:30.500712    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:29:41.848722    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:30:31.749480    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
E0331 11:30:32.473643    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:30:35.920763    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:31:00.166564    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:31:04.885182    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:31:08.297207    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:31:19.605272    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:31:29.619686    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:31:59.021217    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:32:29.806429    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:32:31.344367    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:33:00.488014    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:33:44.938948    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:33:52.851882    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:34:08.744916    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:34:22.648266    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:34:30.541365    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:34:41.887998    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:34:50.173780    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:35:07.986297    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:35:32.514386    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:35:35.959905    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:36:08.284618    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:36:19.591826    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:36:29.605117    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:37:09.494402    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (406.849889ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-221000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
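The wait at start_stop_delete_test.go:274 amounts to listing pods by label selector in the "kubernetes-dashboard" namespace. An equivalent manual check, shown here only as an illustrative sketch (the kubeconfig path and profile name are taken from the log above; minikube names the kubectl context after the profile):

	kubectl --kubeconfig /Users/jenkins/minikube-integration/16144-2324/kubeconfig \
	  --context old-k8s-version-221000 \
	  -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver stopped, this query fails with the same connection error (EOF) seen in the warnings above.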
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-221000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-221000:

-- stdout --
	[
	    {
	        "Id": "0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c",
	        "Created": "2023-03-31T18:13:14.794492262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:19:17.579830346Z",
	            "FinishedAt": "2023-03-31T18:19:14.577555049Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hosts",
	        "LogPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c-json.log",
	        "Name": "/old-k8s-version-221000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-221000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-221000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-221000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-221000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-221000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0456e7b7510bb75cc0d831a39cb0499c70c9c7a3e36cf7af9c3693387f85c05",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53597"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53598"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53600"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53601"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a0456e7b7510",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-221000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0bb0a05e1404",
	                        "old-k8s-version-221000"
	                    ],
	                    "NetworkID": "1369008204ce2a861d531490c08c0f4f11e7797b90e56bf4d65905b433bee06b",
	                    "EndpointID": "298def5630fe6d14ed76667224bda0c3f5879d4b90bc4725c120d066e1d67a98",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
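Note the port mapping in the inspect output: container port 8443/tcp (the apiserver) is published on 127.0.0.1:53601, the exact endpoint the earlier EOF warnings were hitting. The mapping can be read back with docker's Go-template format (illustrative sketch only; container name from the log):

	docker inspect old-k8s-version-221000 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'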
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (414.801134ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-221000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-221000 logs -n 25: (3.416430426s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-221000        | old-k8s-version-221000       | jenkins | v1.29.0 | 31 Mar 23 11:17 PDT |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-221000                              | old-k8s-version-221000       | jenkins | v1.29.0 | 31 Mar 23 11:19 PDT | 31 Mar 23 11:19 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-221000             | old-k8s-version-221000       | jenkins | v1.29.0 | 31 Mar 23 11:19 PDT | 31 Mar 23 11:19 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-221000                              | old-k8s-version-221000       | jenkins | v1.29.0 | 31 Mar 23 11:19 PDT |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-374000 sudo                              | no-preload-374000            | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-374000                                   | no-preload-374000            | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-374000                                   | no-preload-374000            | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-374000                                   | no-preload-374000            | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	| delete  | -p no-preload-374000                                   | no-preload-374000            | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:21 PDT |
	| start   | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:21 PDT | 31 Mar 23 11:22 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-877000            | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT | 31 Mar 23 11:22 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT | 31 Mar 23 11:22 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-877000                 | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT | 31 Mar 23 11:22 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:22 PDT | 31 Mar 23 11:31 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-877000 sudo                             | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	| delete  | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	| delete  | -p                                                     | disable-driver-mounts-563000 | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	|         | disable-driver-mounts-563000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:33 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-594000  | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT | 31 Mar 23 11:33 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT | 31 Mar 23 11:33 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-594000       | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT | 31 Mar 23 11:33 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT |                     |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 11:33:27
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 11:33:27.109294   23040 out.go:296] Setting OutFile to fd 1 ...
	I0331 11:33:27.109556   23040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:33:27.109562   23040 out.go:309] Setting ErrFile to fd 2...
	I0331 11:33:27.109566   23040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:33:27.109678   23040 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 11:33:27.110984   23040 out.go:303] Setting JSON to false
	I0331 11:33:27.131348   23040 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5575,"bootTime":1680282032,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 11:33:27.131432   23040 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 11:33:27.153252   23040 out.go:177] * [default-k8s-diff-port-594000] minikube v1.29.0 on Darwin 13.3
	I0331 11:33:27.175224   23040 notify.go:220] Checking for updates...
	I0331 11:33:27.197231   23040 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 11:33:27.219203   23040 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:33:27.239984   23040 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 11:33:27.261058   23040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 11:33:27.282059   23040 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 11:33:27.302871   23040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 11:33:27.324424   23040 config.go:182] Loaded profile config "default-k8s-diff-port-594000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:33:27.324760   23040 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 11:33:27.388466   23040 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 11:33:27.388599   23040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:33:27.576379   23040 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:33:27.442001809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:33:27.598293   23040 out.go:177] * Using the docker driver based on existing profile
	I0331 11:33:27.619859   23040 start.go:295] selected driver: docker
	I0331 11:33:27.619887   23040 start.go:859] validating driver "docker" against &{Name:default-k8s-diff-port-594000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:33:27.620017   23040 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 11:33:27.624470   23040 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:33:27.811413   23040 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:33:27.679419088 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:33:27.811553   23040 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0331 11:33:27.811572   23040 cni.go:84] Creating CNI manager for ""
	I0331 11:33:27.811600   23040 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:33:27.811614   23040 start_flags.go:319] config:
	{Name:default-k8s-diff-port-594000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docke
r MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:33:27.834583   23040 out.go:177] * Starting control plane node default-k8s-diff-port-594000 in cluster default-k8s-diff-port-594000
	I0331 11:33:27.856127   23040 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 11:33:27.877329   23040 out.go:177] * Pulling base image ...
	I0331 11:33:27.920168   23040 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 11:33:27.920162   23040 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 11:33:27.920268   23040 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0331 11:33:27.920342   23040 cache.go:57] Caching tarball of preloaded images
	I0331 11:33:27.920578   23040 preload.go:174] Found /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 11:33:27.920600   23040 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0331 11:33:27.921550   23040 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/config.json ...
	I0331 11:33:28.015441   23040 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 11:33:28.015464   23040 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 11:33:28.015491   23040 cache.go:193] Successfully downloaded all kic artifacts
	I0331 11:33:28.015537   23040 start.go:364] acquiring machines lock for default-k8s-diff-port-594000: {Name:mk98572be00e11e1e8b81f02f6e4a273f9c4f731 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 11:33:28.015627   23040 start.go:368] acquired machines lock for "default-k8s-diff-port-594000" in 70.727µs
	I0331 11:33:28.015653   23040 start.go:96] Skipping create...Using existing machine configuration
	I0331 11:33:28.015661   23040 fix.go:55] fixHost starting: 
	I0331 11:33:28.015893   23040 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-594000 --format={{.State.Status}}
	I0331 11:33:28.076284   23040 fix.go:103] recreateIfNeeded on default-k8s-diff-port-594000: state=Stopped err=<nil>
	W0331 11:33:28.076314   23040 fix.go:129] unexpected machine state, will restart: <nil>
	I0331 11:33:28.098064   23040 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-594000" ...
	I0331 11:33:28.118970   23040 cli_runner.go:164] Run: docker start default-k8s-diff-port-594000
	I0331 11:33:28.457079   23040 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-594000 --format={{.State.Status}}
	I0331 11:33:28.522357   23040 kic.go:426] container "default-k8s-diff-port-594000" state is running.
	I0331 11:33:28.522933   23040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-594000
	I0331 11:33:28.587208   23040 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/config.json ...
	I0331 11:33:28.587614   23040 machine.go:88] provisioning docker machine ...
	I0331 11:33:28.587637   23040 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-594000"
	I0331 11:33:28.587704   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:28.653643   23040 main.go:141] libmachine: Using SSH client type: native
	I0331 11:33:28.654055   23040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54199 <nil> <nil>}
	I0331 11:33:28.654069   23040 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-594000 && echo "default-k8s-diff-port-594000" | sudo tee /etc/hostname
	I0331 11:33:28.812780   23040 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-594000
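The cli_runner call above is how minikube finds its SSH endpoint: Docker publishes the container's port 22 on an ephemeral localhost port, and the quoted Go template digs it out of NetworkSettings.Ports. A standalone sketch of the same lookup, using the container name from this run:

	# Print the host port mapped to the container's 22/tcp (54199 in this log).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-594000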
	
	I0331 11:33:28.812870   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:28.874494   23040 main.go:141] libmachine: Using SSH client type: native
	I0331 11:33:28.874859   23040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54199 <nil> <nil>}
	I0331 11:33:28.874877   23040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-594000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-594000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-594000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 11:33:29.009107   23040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
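The hosts fixup that just ran is idempotent: if /etc/hosts already names the machine, nothing changes; otherwise an existing 127.0.1.1 line is rewritten or a new one appended. The same pattern, trimmed to a runnable sketch (hostname taken from this run):

	HOST=default-k8s-diff-port-594000
	if ! grep -q "\s${HOST}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOST}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
	  fi
	fi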
	I0331 11:33:29.009131   23040 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 11:33:29.009159   23040 ubuntu.go:177] setting up certificates
	I0331 11:33:29.009167   23040 provision.go:83] configureAuth start
	I0331 11:33:29.009241   23040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-594000
	I0331 11:33:29.069522   23040 provision.go:138] copyHostCerts
	I0331 11:33:29.069616   23040 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 11:33:29.069631   23040 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 11:33:29.069735   23040 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 11:33:29.069943   23040 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 11:33:29.069949   23040 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 11:33:29.070009   23040 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 11:33:29.070158   23040 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 11:33:29.070163   23040 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 11:33:29.070224   23040 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 11:33:29.070349   23040 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-594000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-594000]
	I0331 11:33:29.146266   23040 provision.go:172] copyRemoteCerts
	I0331 11:33:29.146329   23040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 11:33:29.146383   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:29.207205   23040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54199 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/default-k8s-diff-port-594000/id_rsa Username:docker}
	I0331 11:33:29.302368   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 11:33:29.319706   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0331 11:33:29.336792   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0331 11:33:29.353747   23040 provision.go:86] duration metric: configureAuth took 344.574031ms
	I0331 11:33:29.353763   23040 ubuntu.go:193] setting minikube options for container-runtime
	I0331 11:33:29.353912   23040 config.go:182] Loaded profile config "default-k8s-diff-port-594000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 11:33:29.353976   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:29.413858   23040 main.go:141] libmachine: Using SSH client type: native
	I0331 11:33:29.414201   23040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54199 <nil> <nil>}
	I0331 11:33:29.414210   23040 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 11:33:29.547167   23040 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 11:33:29.547179   23040 ubuntu.go:71] root file system type: overlay
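ubuntu.go records the root filesystem type because later provisioning differs for overlay-backed kic containers. The probe is a one-liner:

	# Prints "overlay" inside a kic container; ext4 or similar on a bare VM.
	df --output=fstype / | tail -n 1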
	I0331 11:33:29.547267   23040 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 11:33:29.547349   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:29.609262   23040 main.go:141] libmachine: Using SSH client type: native
	I0331 11:33:29.609613   23040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54199 <nil> <nil>}
	I0331 11:33:29.609666   23040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 11:33:29.752160   23040 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 11:33:29.752259   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:29.812723   23040 main.go:141] libmachine: Using SSH client type: native
	I0331 11:33:29.813081   23040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54199 <nil> <nil>}
	I0331 11:33:29.813094   23040 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 11:33:29.952446   23040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
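The unit update above is deliberately two-phase: the rendered file lands at docker.service.new, and only if it differs from the installed unit is it moved into place and the daemon restarted, so a no-op reprovision never bounces Docker. The compare-then-swap, as a sketch:

	UNIT=/lib/systemd/system/docker.service
	if ! sudo diff -u "$UNIT" "$UNIT.new"; then   # diff exits non-zero on change
	  sudo mv "$UNIT.new" "$UNIT"
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	fi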
	I0331 11:33:29.952465   23040 machine.go:91] provisioned docker machine in 1.364900518s
	I0331 11:33:29.952477   23040 start.go:300] post-start starting for "default-k8s-diff-port-594000" (driver="docker")
	I0331 11:33:29.952482   23040 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 11:33:29.952558   23040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 11:33:29.952614   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:30.013789   23040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54199 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/default-k8s-diff-port-594000/id_rsa Username:docker}
	I0331 11:33:30.108162   23040 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 11:33:30.111680   23040 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 11:33:30.111695   23040 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 11:33:30.111709   23040 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 11:33:30.111716   23040 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 11:33:30.111725   23040 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 11:33:30.111811   23040 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 11:33:30.111974   23040 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 11:33:30.112146   23040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 11:33:30.119316   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:33:30.136632   23040 start.go:303] post-start completed in 184.15363ms
	I0331 11:33:30.136706   23040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 11:33:30.136768   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:30.197657   23040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54199 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/default-k8s-diff-port-594000/id_rsa Username:docker}
	I0331 11:33:30.289283   23040 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 11:33:30.294176   23040 fix.go:57] fixHost completed within 2.278607011s
	I0331 11:33:30.294193   23040 start.go:83] releasing machines lock for "default-k8s-diff-port-594000", held for 2.278655129s
	I0331 11:33:30.294302   23040 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-594000
	I0331 11:33:30.354999   23040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 11:33:30.355000   23040 ssh_runner.go:195] Run: cat /version.json
	I0331 11:33:30.355093   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:30.355096   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:30.417919   23040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54199 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/default-k8s-diff-port-594000/id_rsa Username:docker}
	I0331 11:33:30.417937   23040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54199 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/default-k8s-diff-port-594000/id_rsa Username:docker}
	W0331 11:33:30.562170   23040 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 11:33:30.562253   23040 ssh_runner.go:195] Run: systemctl --version
	I0331 11:33:30.567392   23040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0331 11:33:30.572435   23040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0331 11:33:30.587965   23040 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
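The find/sed above normalizes any loopback CNI config in place: it injects a "name" key when missing and pins cniVersion to 1.0.0, which current CNI plugins insist on. For a single file the patch reduces to (path is illustrative):

	F=/etc/cni/net.d/200-loopback.conf   # illustrative path
	if grep -q loopback "$F"; then
	  grep -q name "$F" || \
	    sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$F"
	  sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$F"
	fi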
	I0331 11:33:30.588050   23040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0331 11:33:30.595852   23040 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0331 11:33:30.595868   23040 start.go:481] detecting cgroup driver to use...
	I0331 11:33:30.595881   23040 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:33:30.595951   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:33:30.608756   23040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0331 11:33:30.617257   23040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 11:33:30.625964   23040 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 11:33:30.626016   23040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 11:33:30.634468   23040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:33:30.642918   23040 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 11:33:30.651341   23040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:33:30.659805   23040 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 11:33:30.667912   23040 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 11:33:30.676486   23040 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 11:33:30.683651   23040 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 11:33:30.690656   23040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:33:30.755640   23040 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0331 11:33:30.831614   23040 start.go:481] detecting cgroup driver to use...
	I0331 11:33:30.831636   23040 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:33:30.831714   23040 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 11:33:30.842040   23040 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 11:33:30.842107   23040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 11:33:30.852390   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:33:30.868196   23040 ssh_runner.go:195] Run: which cri-dockerd
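Because the cluster uses the Docker container runtime, CRI traffic goes through the cri-dockerd shim, so the /etc/crictl.yaml written just above points crictl at its socket. To reproduce and verify by hand (a sketch):

	printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
	# One-off form that bypasses the config file:
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version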
	I0331 11:33:30.872299   23040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 11:33:30.880326   23040 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 11:33:30.919933   23040 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 11:33:31.023872   23040 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 11:33:31.087525   23040 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 11:33:31.087542   23040 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 11:33:31.118764   23040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:33:31.190286   23040 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:33:31.444285   23040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:33:31.513434   23040 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0331 11:33:31.581523   23040 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:33:31.652810   23040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:33:31.718488   23040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0331 11:33:31.739250   23040 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:33:31.814782   23040 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0331 11:33:31.893967   23040 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0331 11:33:31.894078   23040 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0331 11:33:31.898670   23040 start.go:549] Will wait 60s for crictl version
	I0331 11:33:31.898735   23040 ssh_runner.go:195] Run: which crictl
	I0331 11:33:31.902617   23040 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0331 11:33:31.933385   23040 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.2
	RuntimeApiVersion:  v1alpha2
	I0331 11:33:31.933466   23040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:33:31.958724   23040 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:33:32.005034   23040 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 23.0.2 ...
	I0331 11:33:32.005208   23040 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-594000 dig +short host.docker.internal
	I0331 11:33:32.137747   23040 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 11:33:32.137883   23040 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0331 11:33:32.142606   23040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
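host.minikube.internal is how workloads reach the macOS host: the address comes from resolving Docker Desktop's built-in host.docker.internal inside the container, and the /etc/hosts rewrite is done through a temp file so a partial write can never corrupt the file. The same steps, isolated:

	HOST_IP=$(dig +short host.docker.internal)   # 192.168.65.2 in this run
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '%s\thost.minikube.internal\n' "$HOST_IP"
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts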
	I0331 11:33:32.152651   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:32.213293   23040 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 11:33:32.213381   23040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:33:32.234155   23040 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0331 11:33:32.234171   23040 docker.go:569] Images already preloaded, skipping extraction
	I0331 11:33:32.234241   23040 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:33:32.254884   23040 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0331 11:33:32.254906   23040 cache_images.go:84] Images are preloaded, skipping loading
	I0331 11:33:32.254992   23040 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 11:33:32.281848   23040 cni.go:84] Creating CNI manager for ""
	I0331 11:33:32.281868   23040 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:33:32.281892   23040 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0331 11:33:32.281907   23040 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-594000 NodeName:default-k8s-diff-port-594000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 11:33:32.282035   23040 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-594000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
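The kubeadm config above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one file; it is shipped to /var/tmp/minikube/kubeadm.yaml.new before minikube decides between a cluster restart and a fresh init. A file like this can be sanity-checked without touching the node, e.g. (a sketch; assumes a matching kubeadm v1.26.x on the node):

	# Renders manifests and surfaces validation errors; changes nothing.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run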
	
	I0331 11:33:32.282110   23040 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-594000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0331 11:33:32.282176   23040 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0331 11:33:32.290245   23040 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 11:33:32.290307   23040 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 11:33:32.297748   23040 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0331 11:33:32.310694   23040 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0331 11:33:32.323504   23040 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0331 11:33:32.336396   23040 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0331 11:33:32.340106   23040 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:33:32.350093   23040 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000 for IP: 192.168.67.2
	I0331 11:33:32.350111   23040 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:33:32.350287   23040 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 11:33:32.350349   23040 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 11:33:32.350459   23040 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.key
	I0331 11:33:32.350523   23040 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/apiserver.key.c7fa3a9e
	I0331 11:33:32.350577   23040 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/proxy-client.key
	I0331 11:33:32.350782   23040 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 11:33:32.350817   23040 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 11:33:32.350832   23040 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 11:33:32.350869   23040 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 11:33:32.350899   23040 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 11:33:32.350928   23040 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 11:33:32.350996   23040 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:33:32.351580   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 11:33:32.368933   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0331 11:33:32.387711   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 11:33:32.405248   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0331 11:33:32.422612   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 11:33:32.439934   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 11:33:32.457261   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 11:33:32.475008   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 11:33:32.492814   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 11:33:32.510989   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 11:33:32.529575   23040 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 11:33:32.547861   23040 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 11:33:32.561361   23040 ssh_runner.go:195] Run: openssl version
	I0331 11:33:32.567679   23040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 11:33:32.576421   23040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 11:33:32.580450   23040 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 11:33:32.580495   23040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 11:33:32.586143   23040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
	I0331 11:33:32.593949   23040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 11:33:32.601943   23040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:33:32.606009   23040 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:33:32.606060   23040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:33:32.611555   23040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 11:33:32.619079   23040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 11:33:32.627219   23040 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 11:33:32.631275   23040 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 11:33:32.631328   23040 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 11:33:32.637315   23040 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
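The test/hash/ln triples above are a manual c_rehash: OpenSSL locates CA certificates by a file named after the subject-name hash, so every PEM copied into /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs. For one certificate:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # b5213941 for this CA
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"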
	I0331 11:33:32.645365   23040 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-594000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-594000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:33:32.645478   23040 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:33:32.664272   23040 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 11:33:32.672445   23040 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0331 11:33:32.672461   23040 kubeadm.go:633] restartCluster start
	I0331 11:33:32.672511   23040 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0331 11:33:32.679511   23040 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:33:32.679588   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-594000
	I0331 11:33:32.740750   23040 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-594000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:33:32.740916   23040 kubeconfig.go:146] "default-k8s-diff-port-594000" context is missing from /Users/jenkins/minikube-integration/16144-2324/kubeconfig - will repair!
	I0331 11:33:32.741232   23040 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:33:32.742806   23040 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0331 11:33:32.750829   23040 api_server.go:165] Checking apiserver status ...
	I0331 11:33:32.750893   23040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:33:32.759573   23040 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
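The probes that follow repeat this check on a 500 ms cadence: restartCluster will not proceed until a kube-apiserver process whose command line mentions minikube appears. The wait, reduced to a sketch with an explicit timeout:

	# Wait up to 60s for the apiserver process inside the node.
	for _ in $(seq 120); do
	  if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
	    echo "apiserver pid: $pid"; break
	  fi
	  sleep 0.5
	done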
	... [this 500ms cycle -- "Checking apiserver status", `sudo pgrep -xnf kube-apiserver.*minikube.*`, "Process exited with status 1" with empty stdout/stderr -- repeated identically (timestamps aside) from 11:33:33 through 11:33:42] ...
	I0331 11:33:42.770984   23040 api_server.go:165] Checking apiserver status ...
	I0331 11:33:42.771035   23040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:33:42.779737   23040 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:33:42.779750   23040 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
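The loop above is api_server.go polling for a kube-apiserver process every 500ms until it gives up and declares the cluster in need of reconfiguration. A standalone sketch of that poll, run locally rather than through minikube's SSH runner; the 10-second deadline is an arbitrary stand-in for the real wait budget:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForAPIServerPID re-runs pgrep on a 500ms cadence, mirroring the log,
    // until a kube-apiserver process appears or the context deadline expires.
    func waitForAPIServerPID(ctx context.Context) (string, error) {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // pgrep exits 0 once a match exists
    		}
    		select {
    		case <-ctx.Done():
    			return "", fmt.Errorf("stopped: unable to get apiserver pid: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	pid, err := waitForAPIServerPID(ctx)
    	fmt.Println(pid, err)
    }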
	I0331 11:33:42.779758   23040 kubeadm.go:1120] stopping kube-system containers ...
	I0331 11:33:42.779839   23040 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:33:42.800645   23040 docker.go:465] Stopping containers: [2e0d5be10697 cd0a26881ac5 de39e9998b50 85e986a3f90c 9bff1b0ab6fb cfe3cdbe6566 488416f3e0a5 02b97bf42eb9 12f46386b650 45c4d4057343 e52e67ba4d49 7d1cc160b86b 347aa7005690 8233f0b56cc8 1945a4e6f9f5 55d4cd163256]
	I0331 11:33:42.800729   23040 ssh_runner.go:195] Run: docker stop 2e0d5be10697 cd0a26881ac5 de39e9998b50 85e986a3f90c 9bff1b0ab6fb cfe3cdbe6566 488416f3e0a5 02b97bf42eb9 12f46386b650 45c4d4057343 e52e67ba4d49 7d1cc160b86b 347aa7005690 8233f0b56cc8 1945a4e6f9f5 55d4cd163256
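Stopping the kube-system containers is a two-step docker exchange: list container IDs whose names match the kubelet's k8s_<container>_<pod>_(kube-system)_ naming convention, then stop them all in a single `docker stop`. As a self-contained sketch using the exact filter from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// docker's name filter accepts the regex-style pattern shown in the log.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return // nothing to stop
    	}
    	fmt.Println("Stopping containers:", ids)
    	// One invocation for all IDs, as the Run line above does.
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		panic(err)
    	}
    }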
	I0331 11:33:42.824402   23040 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0331 11:33:42.835071   23040 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:33:42.842758   23040 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 31 18:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 31 18:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Mar 31 18:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 31 18:32 /etc/kubernetes/scheduler.conf
	
	I0331 11:33:42.842818   23040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0331 11:33:42.850385   23040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0331 11:33:42.857967   23040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0331 11:33:42.865449   23040 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:33:42.865504   23040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0331 11:33:42.872891   23040 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0331 11:33:42.880327   23040 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:33:42.880375   23040 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0331 11:33:42.887523   23040 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:33:42.895030   23040 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0331 11:33:42.895044   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:33:42.949442   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:33:43.463335   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:33:43.597142   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:33:43.655859   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
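Rather than a full `kubeadm init`, the reconfigure path replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml, with the version-pinned binaries first on PATH (the `addon all` phase follows later in this log, once the apiserver is healthy). The same sequence as the five Run lines above, sketched as a loop; this is illustrative, not minikube's actual bootstrapper code:

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	const bin = "/var/lib/minikube/binaries/v1.26.3"
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command(bin+"/kubeadm", args...)
    		// Mirror the log's `sudo env PATH=...` wrapper; os/exec keeps the
    		// last duplicate env key, so this PATH wins inside the child.
    		cmd.Env = append(os.Environ(), "PATH="+bin+":"+os.Getenv("PATH"))
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			panic(err) // a failed phase aborts the reconfigure
    		}
    	}
    }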
	I0331 11:33:43.748669   23040 api_server.go:51] waiting for apiserver process to appear ...
	I0331 11:33:43.748738   23040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:33:44.260941   23040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:33:44.760358   23040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:33:44.822628   23040 api_server.go:71] duration metric: took 1.07400259s to wait for apiserver process to appear ...
	I0331 11:33:44.822652   23040 api_server.go:87] waiting for apiserver healthz status ...
	I0331 11:33:44.822672   23040 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54203/healthz ...
	I0331 11:33:44.823940   23040 api_server.go:268] stopped: https://127.0.0.1:54203/healthz: Get "https://127.0.0.1:54203/healthz": EOF
	I0331 11:33:45.324858   23040 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54203/healthz ...
	I0331 11:33:47.085521   23040 api_server.go:278] https://127.0.0.1:54203/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0331 11:33:47.085540   23040 api_server.go:102] status: https://127.0.0.1:54203/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0331 11:33:47.325507   23040 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54203/healthz ...
	I0331 11:33:47.332750   23040 api_server.go:278] https://127.0.0.1:54203/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0331 11:33:47.332766   23040 api_server.go:102] status: https://127.0.0.1:54203/healthz returned error 500: [healthz body identical to the response above]
	I0331 11:33:47.825559   23040 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54203/healthz ...
	I0331 11:33:47.832650   23040 api_server.go:278] https://127.0.0.1:54203/healthz returned 500:
	[healthz body identical to the 11:33:47.33 response above: every check ok except poststarthook/rbac/bootstrap-roles and poststarthook/scheduling/bootstrap-system-priority-classes]
	W0331 11:33:47.832663   23040 api_server.go:102] status: https://127.0.0.1:54203/healthz returned error 500: [same body]
	I0331 11:33:48.323889   23040 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54203/healthz ...
	I0331 11:33:48.329729   23040 api_server.go:278] https://127.0.0.1:54203/healthz returned 200:
	ok
	I0331 11:33:48.336731   23040 api_server.go:140] control plane version: v1.26.3
	I0331 11:33:48.336744   23040 api_server.go:130] duration metric: took 3.514233218s to wait for apiserver health ...
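The 403 -> 500 -> 200 progression above is the normal startup arc: the probe is unauthenticated (hence `system:anonymous` being forbidden before RBAC bootstrap completes), then /healthz reports the rbac and scheduling post-start hooks as still failing, then everything turns ok. A sketch of that kind of anonymous healthz poll against the forwarded port; the interval and overall timeout here are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver's serving cert is self-signed for this cluster, so an
    		// anonymous health probe skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for start := time.Now(); time.Since(start) < time.Minute; time.Sleep(500 * time.Millisecond) {
    		resp, err := client.Get("https://127.0.0.1:54203/healthz")
    		if err != nil {
    			continue // connection refused / EOF while the apiserver comes up
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("healthz:", string(body)) // "ok"
    			return
    		}
    		fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    	}
    }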
	I0331 11:33:48.336757   23040 cni.go:84] Creating CNI manager for ""
	I0331 11:33:48.336768   23040 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:33:48.374448   23040 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0331 11:33:48.412189   23040 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0331 11:33:48.421719   23040 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0331 11:33:48.434980   23040 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 11:33:48.442566   23040 system_pods.go:59] 8 kube-system pods found
	I0331 11:33:48.442583   23040 system_pods.go:61] "coredns-787d4945fb-9b4fg" [9c950497-8b71-4178-aeb0-99203b9fdf87] Running
	I0331 11:33:48.442588   23040 system_pods.go:61] "etcd-default-k8s-diff-port-594000" [ee10c750-33ed-4afb-9485-6480a3c615b4] Running
	I0331 11:33:48.442593   23040 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-594000" [ea24a4a1-494b-4bb3-ab45-6a1c4aa0b636] Running
	I0331 11:33:48.442597   23040 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-594000" [fa84c18c-9d24-419f-98e1-8eb13c5f14dc] Running
	I0331 11:33:48.442601   23040 system_pods.go:61] "kube-proxy-995wp" [ad625e5a-233f-4295-8223-8fc581781f6f] Running
	I0331 11:33:48.442605   23040 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-594000" [88558149-2347-43a5-87cb-a56af53a5ffa] Running
	I0331 11:33:48.442612   23040 system_pods.go:61] "metrics-server-cc4f5f75f-dhbcs" [752bd978-655b-453d-8657-aab50b3021e3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0331 11:33:48.442618   23040 system_pods.go:61] "storage-provisioner" [6e5c4444-46fc-4155-8d0f-03162272d605] Running
	I0331 11:33:48.442622   23040 system_pods.go:74] duration metric: took 7.630178ms to wait for pod list to return data ...
	I0331 11:33:48.442628   23040 node_conditions.go:102] verifying NodePressure condition ...
	I0331 11:33:48.445923   23040 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0331 11:33:48.445938   23040 node_conditions.go:123] node cpu capacity is 6
	I0331 11:33:48.445947   23040 node_conditions.go:105] duration metric: took 3.315576ms to run NodePressure ...
	I0331 11:33:48.445959   23040 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:33:48.576543   23040 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0331 11:33:48.580753   23040 kubeadm.go:784] kubelet initialised
	I0331 11:33:48.580766   23040 kubeadm.go:785] duration metric: took 4.209509ms waiting for restarted kubelet to initialise ...
	I0331 11:33:48.580773   23040 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0331 11:33:48.586324   23040 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-9b4fg" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.591304   23040 pod_ready.go:92] pod "coredns-787d4945fb-9b4fg" in "kube-system" namespace has status "Ready":"True"
	I0331 11:33:48.591312   23040 pod_ready.go:81] duration metric: took 4.976892ms waiting for pod "coredns-787d4945fb-9b4fg" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.591320   23040 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.595870   23040 pod_ready.go:92] pod "etcd-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:33:48.595881   23040 pod_ready.go:81] duration metric: took 4.55494ms waiting for pod "etcd-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.595887   23040 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.614022   23040 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:33:48.614035   23040 pod_ready.go:81] duration metric: took 18.137186ms waiting for pod "kube-apiserver-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.614044   23040 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.838894   23040 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:33:48.838906   23040 pod_ready.go:81] duration metric: took 224.864703ms waiting for pod "kube-controller-manager-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:48.838914   23040 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-995wp" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:49.239038   23040 pod_ready.go:92] pod "kube-proxy-995wp" in "kube-system" namespace has status "Ready":"True"
	I0331 11:33:49.239050   23040 pod_ready.go:81] duration metric: took 400.149347ms waiting for pod "kube-proxy-995wp" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:49.239057   23040 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:33:51.647665   23040 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:33:54.146704   23040 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:33:56.643614   23040 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:33:58.647316   23040 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:34:00.649399   23040 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:34:03.144818   23040 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"False"
	I0331 11:34:04.146939   23040 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace has status "Ready":"True"
	I0331 11:34:04.146953   23040 pod_ready.go:81] duration metric: took 14.908525706s waiting for pod "kube-scheduler-default-k8s-diff-port-594000" in "kube-system" namespace to be "Ready" ...
	I0331 11:34:04.146960   23040 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-cc4f5f75f-dhbcs" in "kube-system" namespace to be "Ready" ...
	I0331 11:34:06.158710   23040 pod_ready.go:102] pod "metrics-server-cc4f5f75f-dhbcs" in "kube-system" namespace has status "Ready":"False"
	... [the identical not-Ready check for "metrics-server-cc4f5f75f-dhbcs" repeated at roughly 2.5s intervals from 11:34:08 through 11:37:08] ...
	I0331 11:37:11.149648   23040 pod_ready.go:102] pod "metrics-server-cc4f5f75f-dhbcs" in "kube-system" namespace has status "Ready":"False"
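Each pod_ready.go wait above amounts to re-fetching the pod and inspecting its PodReady condition until the 4m0s budget runs out; metrics-server-cc4f5f75f-dhbcs is the pod that never turns Ready in this window. A client-go sketch of that check, with the pod name, namespace, and kubeconfig path taken from this log; the loop itself is illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/16144-2324/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget from the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"metrics-server-cc4f5f75f-dhbcs", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println(`pod still has status "Ready":"False" at the deadline`)
    }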
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-03-31 18:19:17 UTC, end at Fri 2023-03-31 18:37:16 UTC. --
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.224647123Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225192443Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225275873Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225945996Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226028503Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226077622Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226092567Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226120653Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226137969Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226158434Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226172127Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226220646Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226362523Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226626084Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226676606Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.227155872Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.558285544Z" level=info msg="Loading containers: start."
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.640881276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.675071947Z" level=info msg="Loading containers: done."
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.683436290Z" level=info msg="Docker daemon" commit=219f21b graphdriver=overlay2 version=23.0.2
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.683497721Z" level=info msg="Daemon has completed initialization"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.704168089Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 systemd[1]: Started Docker Application Container Engine.
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.707476643Z" level=info msg="API listen on [::]:2376"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.715129950Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-03-31T18:37:18Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:37:19 up  1:36,  0 users,  load average: 0.70, 1.19, 1.27
	Linux old-k8s-version-221000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-03-31 18:19:17 UTC, end at Fri 2023-03-31 18:37:19 UTC. --
	Mar 31 18:37:17 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 31 18:37:18 old-k8s-version-221000 kubelet[24508]: I0331 18:37:18.314872   24508 server.go:410] Version: v1.16.0
	Mar 31 18:37:18 old-k8s-version-221000 kubelet[24508]: I0331 18:37:18.315146   24508 plugins.go:100] No cloud provider specified.
	Mar 31 18:37:18 old-k8s-version-221000 kubelet[24508]: I0331 18:37:18.315157   24508 server.go:773] Client rotation is on, will bootstrap in background
	Mar 31 18:37:18 old-k8s-version-221000 kubelet[24508]: I0331 18:37:18.316906   24508 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 31 18:37:18 old-k8s-version-221000 kubelet[24508]: W0331 18:37:18.317556   24508 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 31 18:37:18 old-k8s-version-221000 kubelet[24508]: W0331 18:37:18.317671   24508 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 31 18:37:18 old-k8s-version-221000 kubelet[24508]: F0331 18:37:18.317699   24508 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 31 18:37:18 old-k8s-version-221000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 31 18:37:19 old-k8s-version-221000 kubelet[24529]: I0331 18:37:19.068647   24529 server.go:410] Version: v1.16.0
	Mar 31 18:37:19 old-k8s-version-221000 kubelet[24529]: I0331 18:37:19.068928   24529 plugins.go:100] No cloud provider specified.
	Mar 31 18:37:19 old-k8s-version-221000 kubelet[24529]: I0331 18:37:19.068966   24529 server.go:773] Client rotation is on, will bootstrap in background
	Mar 31 18:37:19 old-k8s-version-221000 kubelet[24529]: I0331 18:37:19.070940   24529 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 31 18:37:19 old-k8s-version-221000 kubelet[24529]: W0331 18:37:19.071625   24529 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 31 18:37:19 old-k8s-version-221000 kubelet[24529]: W0331 18:37:19.071693   24529 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 31 18:37:19 old-k8s-version-221000 kubelet[24529]: F0331 18:37:19.071719   24529 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 31 18:37:19 old-k8s-version-221000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 31 18:37:19 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
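The root cause of the restart loop is the fatal `mountpoint for cpu not found`: kubelet v1.16 only understands cgroup v1 and looks for a mount of type `cgroup` that carries the cpu controller, which a host running a unified cgroup v2 hierarchy no longer provides (likely the situation in this linuxkit VM). A minimal illustration of that kind of lookup; the kubelet's real check lives in cadvisor:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findCPUCgroupMount scans /proc/mounts for a cgroup v1 mount that carries
    // the cpu controller, roughly what cadvisor does on the kubelet's behalf.
    func findCPUCgroupMount() (string, bool) {
    	f, err := os.Open("/proc/mounts")
    	if err != nil {
    		return "", false
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		// Fields: device mountpoint fstype options ...
    		fields := strings.Fields(sc.Text())
    		if len(fields) < 4 || fields[2] != "cgroup" {
    			continue // "cgroup2" mounts do not count for a v1-only kubelet
    		}
    		for _, opt := range strings.Split(fields[3], ",") {
    			if opt == "cpu" {
    				return fields[1], true
    			}
    		}
    	}
    	return "", false
    }

    func main() {
    	if mp, ok := findCPUCgroupMount(); ok {
    		fmt.Println("cpu cgroup mounted at", mp)
    	} else {
    		fmt.Println("failed to run Kubelet: mountpoint for cpu not found")
    	}
    }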
	
	

-- /stdout --
** stderr ** 
	E0331 11:37:19.065038   23323 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (417.354588ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-221000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.84s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
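The EOF warnings that follow mean the HTTP request itself is dying (the apiserver behind 127.0.0.1:53601 is down), not that the label selector matched zero pods. The query the helper keeps retrying corresponds to this client-go call, using the kubeconfig path from this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/16144-2324/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent to GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
    	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
    	if err != nil {
    		fmt.Println("WARNING: pod list returned:", err) // EOF while the apiserver is down
    		return
    	}
    	fmt.Println("matching pods:", len(pods.Items))
    }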
E0331 11:37:29.791947    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (identical warning repeated 3 times)
E0331 11:38:00.476617    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (identical warning repeated 3 times)
E0331 11:38:27.119921    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (identical warning repeated 5 times)
E0331 11:39:30.527312    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (identical warning repeated 6 times)
E0331 11:40:32.500634    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:40:35.946032    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:40:53.580974    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:41:08.270713    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:41:19.580933    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:41:29.592352    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF (identical warning repeated 3 times)
E0331 11:41:55.553526    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:42:09.480883    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:42:29.779357    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:53601/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0331 11:43:00.464342    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:43:04.711423    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:04.717262    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:04.727442    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:04.747548    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:04.788243    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:04.868636    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:05.029651    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:05.350645    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
E0331 11:43:05.991761    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
E0331 11:43:07.273952    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:43:09.835412    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:43:14.956940    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:43:25.197916    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:43:27.107213    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:43:44.913541    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:43:45.679266    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:44:08.719409    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:44:26.639776    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:44:30.514919    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:44:32.634011    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:44:41.862336    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:45:32.487499    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:45:35.934437    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:45:48.558536    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/default-k8s-diff-port-594000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:46:03.504196    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:46:08.376738    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:46:19.684398    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0331 11:46:29.699478    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded [identical warning repeated 2 times]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (408.098262ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-221000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-221000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-221000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.299µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-221000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
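Editor's note: the describe call above never reached the cluster (context deadline exceeded), so the deployment info line is empty. The condition the test polls can be checked by hand; a minimal sketch, assuming the old-k8s-version-221000 kubeconfig context resolves and the apiserver is back up:

	# List the pods the test waits on (same namespace and label selector).
	kubectl --context old-k8s-version-221000 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard -o wide

	# Recent events often show why a pod never became Ready.
	kubectl --context old-k8s-version-221000 -n kubernetes-dashboard \
	  get events --sort-by=.metadata.creationTimestamp

Here the apiserver was reported Stopped, so these commands would fail the same way until the control plane is running again.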
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-221000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-221000:

-- stdout --
	[
	    {
	        "Id": "0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c",
	        "Created": "2023-03-31T18:13:14.794492262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-31T18:19:17.579830346Z",
	            "FinishedAt": "2023-03-31T18:19:14.577555049Z"
	        },
	        "Image": "sha256:e2a21e2966a9bc54932b0177ccaaf147775c28fd6729fa50fc93f998eb5d1d4e",
	        "ResolvConfPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hostname",
	        "HostsPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/hosts",
	        "LogPath": "/var/lib/docker/containers/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c/0bb0a05e14047e05df95551f0074a3f51840d9971494dab21229cc308c9d620c-json.log",
	        "Name": "/old-k8s-version-221000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-221000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-221000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756-init/diff:/var/lib/docker/overlay2/c52de480a9d3b92156a6b7f40b9f29c89c00bff0dc7d6acec95d15bf8fa7e706/diff:/var/lib/docker/overlay2/4fa51510fbaca99d18589345b5c49ac647c3852526857e140385c8c74142d864/diff:/var/lib/docker/overlay2/cf9c75d0f98b371f5655e6f7a9422b077615b807b5ded6caad5cb3ade54a6bcf/diff:/var/lib/docker/overlay2/b5f4d681c4091990a5aedc7eba843be0d1f3bb91c8805b248e118c1a15aeb426/diff:/var/lib/docker/overlay2/726f8f99c4617031c8c79d131e446f30d959a0783880b92dd26264e6e07f578f/diff:/var/lib/docker/overlay2/008245a012fc592a94495b269e28d4d957b15a5f74e01a1fcbf876c5a4ba70d1/diff:/var/lib/docker/overlay2/e814b23972aa1481aab63bf91bb25741253bd9f081c67374e1f699c38c83e20b/diff:/var/lib/docker/overlay2/c78d407937cce04bc3c30a83fbf1b7d21b115b59f4095d751a209b86004e5084/diff:/var/lib/docker/overlay2/ee5f9401c2be285db119edbae2a99aed2dcb647e8d11cf47fca0347187d62e4b/diff:/var/lib/docker/overlay2/332be6
c4fb796c3c6b37a76dafc41ec8a1ba8e959fd7d71a94f827b6fb735ad4/diff:/var/lib/docker/overlay2/d6b9e54e2d5bb577d56e176337be5c5b76cd0af24af6644eb07287be261db26b/diff:/var/lib/docker/overlay2/2d0494381df049a5d91fa5ab305c5e51b253d3e85b0218e811be4f8356a37428/diff:/var/lib/docker/overlay2/afd882240733a5ed95e43e2142b7cb8a2b4d1326880618cc2324b03915020c4c/diff:/var/lib/docker/overlay2/33fa3700a4a4c49f289c610638ccb45ed575386bfb3064629f04c300421c4310/diff:/var/lib/docker/overlay2/e98659a1347b2114201116e62517d120bf4e0142318c89985a118ad2ae3e26a1/diff:/var/lib/docker/overlay2/5c7c035c89c1bcce5e2168402e485f3512a039670e3d6dd9fc3d76fb08f8244a/diff:/var/lib/docker/overlay2/ef4639b18525d48d115761fc8c9f0e9a4a49d9b1e2fe1dee9e17693200e24e74/diff:/var/lib/docker/overlay2/94b8ff4f6f12f9180a3bd2f3938f4cb57670fbb53250a7c3dbf644d1d1e6dea1/diff:/var/lib/docker/overlay2/58bf45edc67803e952c718a39796c16083117bdb5cf686e2f5854a023396b032/diff:/var/lib/docker/overlay2/822a54dff24a23d74321b1adf2e843efd31f58b4cedd73f9b2e3475250134d45/diff:/var/lib/d
ocker/overlay2/ee6c9457f9446cba6d6a0f198210c8beed12156fcdd969bc56f17417e918807d/diff:/var/lib/docker/overlay2/f95430109fc5db985ed6ca29cf75f665a17355890956c738d6d95c768cfbf69a/diff:/var/lib/docker/overlay2/cc22b7f9ebaea7002093337d464d55be8275142d31159d9ebdec3a9850a5f950/diff:/var/lib/docker/overlay2/b160c09d12608cec7a0042efb55263ffbdcb36ec0b0d73018e4cb4d726408d81/diff:/var/lib/docker/overlay2/5f6ff7fa8d97499d42cbb31528fe37f008f703abfcbbe973ed0d9f145c9d6039/diff:/var/lib/docker/overlay2/fabd7993133916925eded9ca46e7df8102d62c181ab0c081245d7c1ff1283c27/diff:/var/lib/docker/overlay2/2e6ba7aa5cb90faf1e555f4e520096483fbd232af03f3692ac51612714d0e385/diff:/var/lib/docker/overlay2/3a9104d80fb41426d356ca9e7fa94d0985824ceed9552e14890f18baccb9efa3/diff:/var/lib/docker/overlay2/9fb0d3a7b4b028d223f98735c60cf8066a223c50c202ee97ebe9d34a53f2513c/diff:/var/lib/docker/overlay2/06e8638a1f85e84a5044d94c8f64c3db2e4d2ec069d74632686067d7bb4b5172/diff:/var/lib/docker/overlay2/b986e2ce1a3377c222863b76fcfc811e9f7f3af845fae4a031c1be7034a
2db30/diff:/var/lib/docker/overlay2/8c61ff71163863f677f6c1cf8517ea53d543086afddf87fbcad9200e3d175b61/diff:/var/lib/docker/overlay2/5ce942778f0cdc742635ab8f4ee5aec345051ff4b67d4195e6aaa66c4aae4e14/diff:/var/lib/docker/overlay2/7d0a926a2580ecaf6b2aead105bb64b77d4837d3ea6e0c85cd95fdd3333f00bb/diff:/var/lib/docker/overlay2/d0d03db4cef8aaf8782b17d0626ad44f733ce7f04c3e21bf65084b97c6ea67cb/diff:/var/lib/docker/overlay2/e883b9d6436927d753216e62fe82d039db2f23ddcf499bf20a314e3430f7daef/diff:/var/lib/docker/overlay2/7718289812bceacd3143fbe5fd71a56482d7c577a981b237e8c007ff52731628/diff:/var/lib/docker/overlay2/acc3c766512d89dd02b3d3b06cbb9c7967ed29e4c8153cd9619d018c6b6de87a/diff:/var/lib/docker/overlay2/c7cab844f64ef8e73212fc5acb293faa5813da467c3228c985f682ec2cb30164/diff:/var/lib/docker/overlay2/753578deafc777ffa7c6d2a91835e011b17cb8336d09ba1adda245eccf3fbe12/diff:/var/lib/docker/overlay2/6a8e8d84668fb5b2eb3554aeb439a789e0efad3dde621d850c5c522256ea168c/diff:/var/lib/docker/overlay2/73847ab62012310cb9c6b55b335aa966ece312
b33e0dc4c7be39ab7733b4f1ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9371145efd9e65e4cf9bb0a81f4b673e60c5dc231a80c5de8008817807bc8756/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-221000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-221000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-221000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-221000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0456e7b7510bb75cc0d831a39cb0499c70c9c7a3e36cf7af9c3693387f85c05",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53597"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53598"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53599"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53600"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53601"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a0456e7b7510",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-221000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "0bb0a05e1404",
	                        "old-k8s-version-221000"
	                    ],
	                    "NetworkID": "1369008204ce2a861d531490c08c0f4f11e7797b90e56bf4d65905b433bee06b",
	                    "EndpointID": "298def5630fe6d14ed76667224bda0c3f5879d4b90bc4725c120d066e1d67a98",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
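Editor's note: the inspect dump above is captured in full by the post-mortem helper; for targeted checks the same data can be filtered with docker's Go-template flag. A sketch using the standard docker CLI against the container named in this report:

	# Container state block only (Running/Paused/OOMKilled/ExitCode).
	docker inspect -f '{{json .State}}' old-k8s-version-221000

	# Host port bound to the apiserver's 8443/tcp.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-221000

The second template is the same pattern minikube itself uses in the "Last Start" log below to look up the SSH port (22/tcp).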
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (403.976072ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
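Editor's note: a nonzero exit from minikube status is expected while components are stopped, which is why the helper tags it "(may be ok)". The per-component view behind the {{.Host}} and {{.APIServer}} templates can also be fetched in one call; a sketch, assuming the same binary and profile:

	# Full profile status as JSON instead of a single template field.
	out/minikube-darwin-amd64 status -p old-k8s-version-221000 --output json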
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-221000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-221000 logs -n 25: (3.379707662s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	| delete  | -p embed-certs-877000                                  | embed-certs-877000           | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	| delete  | -p                                                     | disable-driver-mounts-563000 | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:32 PDT |
	|         | disable-driver-mounts-563000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:32 PDT | 31 Mar 23 11:33 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-594000  | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT | 31 Mar 23 11:33 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT | 31 Mar 23 11:33 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-594000       | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT | 31 Mar 23 11:33 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:33 PDT | 31 Mar 23 11:38 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:38 PDT | 31 Mar 23 11:38 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:38 PDT | 31 Mar 23 11:38 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:38 PDT | 31 Mar 23 11:38 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:38 PDT | 31 Mar 23 11:39 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-594000 | jenkins | v1.29.0 | 31 Mar 23 11:39 PDT | 31 Mar 23 11:39 PDT |
	|         | default-k8s-diff-port-594000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-822000 --memory=2200 --alsologtostderr   | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:39 PDT | 31 Mar 23 11:39 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.0-rc.0     |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-822000             | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:39 PDT | 31 Mar 23 11:39 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-822000                                   | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:39 PDT | 31 Mar 23 11:39 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-822000                  | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:39 PDT | 31 Mar 23 11:39 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-822000 --memory=2200 --alsologtostderr   | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:39 PDT | 31 Mar 23 11:40 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.0-rc.0     |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-822000 sudo                              | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:40 PDT | 31 Mar 23 11:40 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-822000                                   | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:40 PDT | 31 Mar 23 11:40 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-822000                                   | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:40 PDT | 31 Mar 23 11:40 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-822000                                   | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:40 PDT | 31 Mar 23 11:40 PDT |
	| delete  | -p newest-cni-822000                                   | newest-cni-822000            | jenkins | v1.29.0 | 31 Mar 23 11:40 PDT | 31 Mar 23 11:40 PDT |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 11:39:52
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 11:39:52.110663   23770 out.go:296] Setting OutFile to fd 1 ...
	I0331 11:39:52.110849   23770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:39:52.110854   23770 out.go:309] Setting ErrFile to fd 2...
	I0331 11:39:52.110858   23770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 11:39:52.110976   23770 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 11:39:52.112258   23770 out.go:303] Setting JSON to false
	I0331 11:39:52.132154   23770 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5960,"bootTime":1680282032,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 11:39:52.132350   23770 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 11:39:52.153654   23770 out.go:177] * [newest-cni-822000] minikube v1.29.0 on Darwin 13.3
	I0331 11:39:52.196978   23770 notify.go:220] Checking for updates...
	I0331 11:39:52.218770   23770 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 11:39:52.240000   23770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:39:52.262101   23770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 11:39:52.283852   23770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 11:39:52.305057   23770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 11:39:52.326783   23770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 11:39:52.348289   23770 config.go:182] Loaded profile config "newest-cni-822000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0331 11:39:52.348967   23770 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 11:39:52.412837   23770 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 11:39:52.412977   23770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:39:52.599366   23770 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:39:52.466578023 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:39:52.621410   23770 out.go:177] * Using the docker driver based on existing profile
	I0331 11:39:52.642886   23770 start.go:295] selected driver: docker
	I0331 11:39:52.642906   23770 start.go:859] validating driver "docker" against &{Name:newest-cni-822000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-822000 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:39:52.643069   23770 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 11:39:52.647164   23770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 11:39:52.834744   23770 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-31 18:39:52.700680973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 11:39:52.834908   23770 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0331 11:39:52.834928   23770 cni.go:84] Creating CNI manager for ""
	I0331 11:39:52.834940   23770 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:39:52.834958   23770 start_flags.go:319] config:
	{Name:newest-cni-822000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: N
etworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false M
ountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:39:52.878467   23770 out.go:177] * Starting control plane node newest-cni-822000 in cluster newest-cni-822000
	I0331 11:39:52.899513   23770 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 11:39:52.921307   23770 out.go:177] * Pulling base image ...
	I0331 11:39:52.963380   23770 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 11:39:52.963419   23770 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 11:39:52.963466   23770 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0331 11:39:52.963485   23770 cache.go:57] Caching tarball of preloaded images
	I0331 11:39:52.963679   23770 preload.go:174] Found /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0331 11:39:52.963697   23770 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.0-rc.0 on docker
	I0331 11:39:52.964719   23770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/config.json ...
	I0331 11:39:53.037529   23770 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon, skipping pull
	I0331 11:39:53.037551   23770 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in daemon, skipping load
	I0331 11:39:53.037571   23770 cache.go:193] Successfully downloaded all kic artifacts
	I0331 11:39:53.037617   23770 start.go:364] acquiring machines lock for newest-cni-822000: {Name:mk4ee3ba21812a453b9b414c3ff2595dad2514dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0331 11:39:53.037704   23770 start.go:368] acquired machines lock for "newest-cni-822000" in 65.771µs
	I0331 11:39:53.037736   23770 start.go:96] Skipping create...Using existing machine configuration
	I0331 11:39:53.037746   23770 fix.go:55] fixHost starting: 
	I0331 11:39:53.037982   23770 cli_runner.go:164] Run: docker container inspect newest-cni-822000 --format={{.State.Status}}
	I0331 11:39:53.100833   23770 fix.go:103] recreateIfNeeded on newest-cni-822000: state=Stopped err=<nil>
	W0331 11:39:53.100864   23770 fix.go:129] unexpected machine state, will restart: <nil>
	I0331 11:39:53.123045   23770 out.go:177] * Restarting existing docker container for "newest-cni-822000" ...
	I0331 11:39:53.143914   23770 cli_runner.go:164] Run: docker start newest-cni-822000
	I0331 11:39:53.496649   23770 cli_runner.go:164] Run: docker container inspect newest-cni-822000 --format={{.State.Status}}
	I0331 11:39:53.569400   23770 kic.go:426] container "newest-cni-822000" state is running.
	I0331 11:39:53.570080   23770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-822000
	I0331 11:39:53.642009   23770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/config.json ...
	I0331 11:39:53.642457   23770 machine.go:88] provisioning docker machine ...
	I0331 11:39:53.642488   23770 ubuntu.go:169] provisioning hostname "newest-cni-822000"
	I0331 11:39:53.642561   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:53.720661   23770 main.go:141] libmachine: Using SSH client type: native
	I0331 11:39:53.721187   23770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54702 <nil> <nil>}
	I0331 11:39:53.721202   23770 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-822000 && echo "newest-cni-822000" | sudo tee /etc/hostname
	I0331 11:39:53.886927   23770 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-822000
	
	I0331 11:39:53.887039   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:53.950780   23770 main.go:141] libmachine: Using SSH client type: native
	I0331 11:39:53.951147   23770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54702 <nil> <nil>}
	I0331 11:39:53.951165   23770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-822000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-822000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-822000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0331 11:39:54.086240   23770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 11:39:54.086262   23770 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16144-2324/.minikube CaCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16144-2324/.minikube}
	I0331 11:39:54.086284   23770 ubuntu.go:177] setting up certificates
	I0331 11:39:54.086293   23770 provision.go:83] configureAuth start
	I0331 11:39:54.086381   23770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-822000
	I0331 11:39:54.146816   23770 provision.go:138] copyHostCerts
	I0331 11:39:54.146904   23770 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem, removing ...
	I0331 11:39:54.146915   23770 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem
	I0331 11:39:54.147004   23770 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.pem (1078 bytes)
	I0331 11:39:54.147209   23770 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem, removing ...
	I0331 11:39:54.147217   23770 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem
	I0331 11:39:54.147276   23770 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/cert.pem (1123 bytes)
	I0331 11:39:54.147429   23770 exec_runner.go:144] found /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem, removing ...
	I0331 11:39:54.147435   23770 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem
	I0331 11:39:54.147496   23770 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16144-2324/.minikube/key.pem (1679 bytes)
	I0331 11:39:54.147622   23770 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem org=jenkins.newest-cni-822000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-822000]
	I0331 11:39:54.317878   23770 provision.go:172] copyRemoteCerts
	I0331 11:39:54.317953   23770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0331 11:39:54.318009   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:54.378850   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:39:54.475122   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0331 11:39:54.493384   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0331 11:39:54.511015   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0331 11:39:54.528638   23770 provision.go:86] duration metric: configureAuth took 442.351013ms
	I0331 11:39:54.528652   23770 ubuntu.go:193] setting minikube options for container-runtime
	I0331 11:39:54.528813   23770 config.go:182] Loaded profile config "newest-cni-822000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0331 11:39:54.528878   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:54.589986   23770 main.go:141] libmachine: Using SSH client type: native
	I0331 11:39:54.590328   23770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54702 <nil> <nil>}
	I0331 11:39:54.590340   23770 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0331 11:39:54.723683   23770 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0331 11:39:54.723697   23770 ubuntu.go:71] root file system type: overlay
	I0331 11:39:54.723815   23770 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0331 11:39:54.723894   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:54.784904   23770 main.go:141] libmachine: Using SSH client type: native
	I0331 11:39:54.785254   23770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54702 <nil> <nil>}
	I0331 11:39:54.785304   23770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0331 11:39:54.929704   23770 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0331 11:39:54.929812   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:54.991315   23770 main.go:141] libmachine: Using SSH client type: native
	I0331 11:39:54.991659   23770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 54702 <nil> <nil>}
	I0331 11:39:54.991676   23770 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0331 11:39:55.130948   23770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0331 11:39:55.130966   23770 machine.go:91] provisioned docker machine in 1.488563299s
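The quiet SSH result above is the idempotent unit swap: the new docker.service is written beside the live one and only moved into place (followed by daemon-reload and a restart) when diff -u reports a difference. A local Go sketch of the same write-if-changed pattern, illustrative rather than taken from minikube's source:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

// updateUnit replaces path with want only when the content differs,
// so repeated provisioning runs skip the daemon-reload and restart.
func updateUnit(path string, want []byte) error {
	cur, err := os.ReadFile(path)
	if err == nil && bytes.Equal(cur, want) {
		return nil // unchanged: nothing to do
	}
	if err := os.WriteFile(path+".new", want, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	return exec.Command("systemctl", "daemon-reload").Run()
}

func main() {
	if err := updateUnit("docker.service", []byte("[Unit]\n")); err != nil {
		log.Fatal(err)
	}
}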
	I0331 11:39:55.130976   23770 start.go:300] post-start starting for "newest-cni-822000" (driver="docker")
	I0331 11:39:55.130983   23770 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0331 11:39:55.131071   23770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0331 11:39:55.131131   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:55.191928   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:39:55.289681   23770 ssh_runner.go:195] Run: cat /etc/os-release
	I0331 11:39:55.293391   23770 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0331 11:39:55.293406   23770 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0331 11:39:55.293413   23770 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0331 11:39:55.293420   23770 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0331 11:39:55.293429   23770 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/addons for local assets ...
	I0331 11:39:55.293517   23770 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16144-2324/.minikube/files for local assets ...
	I0331 11:39:55.293674   23770 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem -> 28002.pem in /etc/ssl/certs
	I0331 11:39:55.293833   23770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0331 11:39:55.301128   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:39:55.318306   23770 start.go:303] post-start completed in 187.328284ms
	I0331 11:39:55.318391   23770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 11:39:55.318455   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:55.379613   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:39:55.473330   23770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0331 11:39:55.478050   23770 fix.go:57] fixHost completed within 2.440403159s
	I0331 11:39:55.478064   23770 start.go:83] releasing machines lock for "newest-cni-822000", held for 2.440457217s
	I0331 11:39:55.478154   23770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-822000
	I0331 11:39:55.539606   23770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0331 11:39:55.539608   23770 ssh_runner.go:195] Run: cat /version.json
	I0331 11:39:55.539672   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:55.539689   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:55.603711   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:39:55.603896   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	W0331 11:39:55.746594   23770 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.30.0 -> Actual minikube version: v1.29.0
	I0331 11:39:55.746691   23770 ssh_runner.go:195] Run: systemctl --version
	I0331 11:39:55.751969   23770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0331 11:39:55.757505   23770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0331 11:39:55.774562   23770 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0331 11:39:55.774648   23770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0331 11:39:55.783042   23770 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
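The two find commands above first patch any loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0), then disable competing bridge/podman configs by renaming them with a .mk_disabled suffix; in this run none were present. A Go sketch of the rename step, assuming it runs as root inside the node and using the same glob patterns for illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			continue
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err == nil {
				fmt.Println("disabled:", m)
			}
		}
	}
}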
	I0331 11:39:55.783061   23770 start.go:481] detecting cgroup driver to use...
	I0331 11:39:55.783074   23770 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:39:55.783155   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:39:55.796488   23770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0331 11:39:55.805307   23770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0331 11:39:55.814485   23770 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0331 11:39:55.814550   23770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0331 11:39:55.823091   23770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:39:55.831835   23770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0331 11:39:55.840821   23770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0331 11:39:55.849784   23770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0331 11:39:55.857737   23770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0331 11:39:55.866313   23770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0331 11:39:55.873647   23770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0331 11:39:55.880780   23770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:39:55.945538   23770 ssh_runner.go:195] Run: sudo systemctl restart containerd
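Each sed above edits /etc/containerd/config.toml in place so containerd agrees with the detected "cgroupfs" driver: SystemdCgroup is forced to false, the legacy v1 runtimes are mapped to io.containerd.runc.v2, and the CNI conf_dir is pinned. A Go sketch of one such edit (the SystemdCgroup toggle) using a regexp instead of sed; the path and file mode are assumptions:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	cfg, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// (?m) makes ^...$ match per line, mirroring sed's line-oriented edit.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(cfg, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}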
	I0331 11:39:56.024350   23770 start.go:481] detecting cgroup driver to use...
	I0331 11:39:56.024382   23770 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0331 11:39:56.024451   23770 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0331 11:39:56.037216   23770 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0331 11:39:56.037283   23770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0331 11:39:56.047737   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0331 11:39:56.062094   23770 ssh_runner.go:195] Run: which cri-dockerd
	I0331 11:39:56.066290   23770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0331 11:39:56.074433   23770 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0331 11:39:56.104560   23770 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0331 11:39:56.164998   23770 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0331 11:39:56.256032   23770 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0331 11:39:56.256054   23770 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0331 11:39:56.269887   23770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:39:56.357908   23770 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0331 11:39:56.646027   23770 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:39:56.720102   23770 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0331 11:39:56.790142   23770 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0331 11:39:56.856629   23770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:39:56.922525   23770 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0331 11:39:56.934183   23770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0331 11:39:57.001055   23770 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0331 11:39:57.073637   23770 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0331 11:39:57.073741   23770 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0331 11:39:57.078464   23770 start.go:549] Will wait 60s for crictl version
	I0331 11:39:57.078526   23770 ssh_runner.go:195] Run: which crictl
	I0331 11:39:57.082461   23770 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0331 11:39:57.113173   23770 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.2
	RuntimeApiVersion:  v1alpha2
	I0331 11:39:57.134294   23770 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:39:57.160661   23770 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0331 11:39:57.208996   23770 out.go:204] * Preparing Kubernetes v1.27.0-rc.0 on Docker 23.0.2 ...
	I0331 11:39:57.209201   23770 cli_runner.go:164] Run: docker exec -t newest-cni-822000 dig +short host.docker.internal
	I0331 11:39:57.330046   23770 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0331 11:39:57.330184   23770 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0331 11:39:57.334663   23770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:39:57.344681   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:57.427436   23770 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0331 11:39:57.449301   23770 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 11:39:57.449466   23770 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:39:57.473026   23770 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 11:39:57.473050   23770 docker.go:569] Images already preloaded, skipping extraction
	I0331 11:39:57.473118   23770 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0331 11:39:57.493235   23770 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0331 11:39:57.493260   23770 cache_images.go:84] Images are preloaded, skipping loading
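Listing the images twice above is the preload check: the runtime's "docker images" output is compared against the set required for v1.27.0-rc.0, and extraction of the preload tarball is skipped when everything is already present. A Go sketch of that comparison, with a deliberately shortened expected list taken from the output above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Illustrative subset of the images required for this Kubernetes version.
	want := []string{
		"registry.k8s.io/etcd:3.5.7-0",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing, preload extraction needed:", img)
		}
	}
}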
	I0331 11:39:57.493360   23770 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0331 11:39:57.519085   23770 cni.go:84] Creating CNI manager for ""
	I0331 11:39:57.519105   23770 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:39:57.519124   23770 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0331 11:39:57.519143   23770 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-822000 NodeName:newest-cni-822000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0331 11:39:57.519274   23770 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-822000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0331 11:39:57.519356   23770 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-822000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0331 11:39:57.519426   23770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.0-rc.0
	I0331 11:39:57.527393   23770 binaries.go:44] Found k8s binaries, skipping transfer
	I0331 11:39:57.527451   23770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0331 11:39:57.534626   23770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0331 11:39:57.547728   23770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0331 11:39:57.560795   23770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
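The 2222-byte file just staged as kubeadm.yaml.new is the multi-document YAML stream printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by ---); later in the restart path it is diffed against the live kubeadm.yaml and promoted if it differs. A Go sketch of walking such a stream, assuming gopkg.in/yaml.v3 is on the module path and a local copy of the file:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the staged file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break // end of the multi-document stream
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Println(doc["apiVersion"], doc["kind"])
	}
}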
	I0331 11:39:57.574011   23770 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0331 11:39:57.578000   23770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0331 11:39:57.587758   23770 certs.go:56] Setting up /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000 for IP: 192.168.67.2
	I0331 11:39:57.587776   23770 certs.go:186] acquiring lock for shared ca certs: {Name:mk1ddc355573fb6044e73c93dd0e9bf4bae32052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:39:57.587934   23770 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key
	I0331 11:39:57.587996   23770 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key
	I0331 11:39:57.588079   23770 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/client.key
	I0331 11:39:57.588137   23770 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/apiserver.key.c7fa3a9e
	I0331 11:39:57.588194   23770 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/proxy-client.key
	I0331 11:39:57.588393   23770 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem (1338 bytes)
	W0331 11:39:57.588430   23770 certs.go:397] ignoring /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800_empty.pem, impossibly tiny 0 bytes
	I0331 11:39:57.588443   23770 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca-key.pem (1679 bytes)
	I0331 11:39:57.588476   23770 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/ca.pem (1078 bytes)
	I0331 11:39:57.588510   23770 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/cert.pem (1123 bytes)
	I0331 11:39:57.588541   23770 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/certs/key.pem (1679 bytes)
	I0331 11:39:57.588610   23770 certs.go:401] found cert: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem (1708 bytes)
	I0331 11:39:57.589205   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0331 11:39:57.606666   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0331 11:39:57.624104   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0331 11:39:57.641348   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/newest-cni-822000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0331 11:39:57.659361   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0331 11:39:57.676717   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0331 11:39:57.694270   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0331 11:39:57.711518   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0331 11:39:57.728879   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/ssl/certs/28002.pem --> /usr/share/ca-certificates/28002.pem (1708 bytes)
	I0331 11:39:57.746238   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0331 11:39:57.763449   23770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16144-2324/.minikube/certs/2800.pem --> /usr/share/ca-certificates/2800.pem (1338 bytes)
	I0331 11:39:57.780767   23770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0331 11:39:57.794265   23770 ssh_runner.go:195] Run: openssl version
	I0331 11:39:57.799720   23770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0331 11:39:57.808024   23770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:39:57.812169   23770 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 31 17:21 /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:39:57.812223   23770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0331 11:39:57.817745   23770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0331 11:39:57.825564   23770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2800.pem && ln -fs /usr/share/ca-certificates/2800.pem /etc/ssl/certs/2800.pem"
	I0331 11:39:57.833694   23770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2800.pem
	I0331 11:39:57.837821   23770 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 31 17:26 /usr/share/ca-certificates/2800.pem
	I0331 11:39:57.837866   23770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2800.pem
	I0331 11:39:57.843199   23770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2800.pem /etc/ssl/certs/51391683.0"
	I0331 11:39:57.850727   23770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/28002.pem && ln -fs /usr/share/ca-certificates/28002.pem /etc/ssl/certs/28002.pem"
	I0331 11:39:57.858983   23770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/28002.pem
	I0331 11:39:57.863065   23770 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 31 17:26 /usr/share/ca-certificates/28002.pem
	I0331 11:39:57.863108   23770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/28002.pem
	I0331 11:39:57.868525   23770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/28002.pem /etc/ssl/certs/3ec20f2e.0"
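Each cert above gets a companion symlink named for its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) because that is how OpenSSL's default verify path looks up CAs in /etc/ssl/certs. A Go sketch of the hash-then-link step, shelling out to openssl the same way the log does; the input path is an assumption:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // assumed input cert
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return // already linked, mirroring the test -L guard above
	}
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
}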
	I0331 11:39:57.876153   23770 kubeadm.go:401] StartCluster: {Name:newest-cni-822000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 11:39:57.876263   23770 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:39:57.896483   23770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0331 11:39:57.904459   23770 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0331 11:39:57.904474   23770 kubeadm.go:633] restartCluster start
	I0331 11:39:57.904521   23770 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0331 11:39:57.911598   23770 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:39:57.911665   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:39:57.973341   23770 kubeconfig.go:135] verify returned: extract IP: "newest-cni-822000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:39:57.973493   23770 kubeconfig.go:146] "newest-cni-822000" context is missing from /Users/jenkins/minikube-integration/16144-2324/kubeconfig - will repair!
	I0331 11:39:57.973834   23770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:39:57.975254   23770 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0331 11:39:57.983656   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:39:57.983724   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:39:57.993071   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:39:58.495180   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:39:58.495309   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:39:58.506638   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:39:58.993884   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:39:58.994071   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:39:59.005342   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:39:59.493110   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:39:59.493200   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:39:59.502986   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:39:59.994631   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:39:59.994770   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:00.006349   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:00.494612   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:00.494796   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:00.505746   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:00.993850   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:00.993922   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:01.003732   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:01.495023   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:01.495237   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:01.506182   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:01.993820   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:01.993960   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:02.004908   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:02.494837   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:02.494941   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:02.504776   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:02.994253   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:02.994423   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:03.005646   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:03.494169   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:03.494292   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:03.505461   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:03.992893   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:03.992963   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:04.002169   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:04.493311   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:04.493493   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:04.504680   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:04.994134   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:04.994323   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:05.005616   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:05.492846   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:05.492975   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:05.502582   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:05.994793   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:05.994950   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:06.005993   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:06.494833   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:06.494994   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:06.506491   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:06.992785   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:06.992876   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:07.002508   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:07.493335   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:07.493453   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:07.504562   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:07.994819   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:07.994998   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:08.006560   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:08.006572   23770 api_server.go:165] Checking apiserver status ...
	I0331 11:40:08.006621   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0331 11:40:08.015118   23770 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:08.015130   23770 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
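The block above is a fixed-interval poll: pgrep runs roughly every half second from 11:39:57 until 11:40:08, and when no kube-apiserver process ever appears the restart path concludes the cluster needs reconfiguring. A Go sketch of the loop shape; the interval and timeout are read off the timestamps here, not taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Exit status 0 means pgrep found a matching process.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServer(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}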
	I0331 11:40:08.015140   23770 kubeadm.go:1120] stopping kube-system containers ...
	I0331 11:40:08.015213   23770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0331 11:40:08.036202   23770 docker.go:465] Stopping containers: [4438a73d4b20 4da112803b83 b9092ead5a32 be7848c9812d 138247a67222 88f514e98f67 f3d0a44821d5 3973ba842d1d 869b70431f26 e4bb830631ea 97f0d7778444 252845cf777a 4dbd5ec6639e 886e1f03c0ee c612073d4088 a184fc5c0cb2 30cb5f1be5b1]
	I0331 11:40:08.036293   23770 ssh_runner.go:195] Run: docker stop 4438a73d4b20 4da112803b83 b9092ead5a32 be7848c9812d 138247a67222 88f514e98f67 f3d0a44821d5 3973ba842d1d 869b70431f26 e4bb830631ea 97f0d7778444 252845cf777a 4dbd5ec6639e 886e1f03c0ee c612073d4088 a184fc5c0cb2 30cb5f1be5b1
	I0331 11:40:08.058494   23770 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0331 11:40:08.069078   23770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0331 11:40:08.077001   23770 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 31 18:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Mar 31 18:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar 31 18:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 31 18:39 /etc/kubernetes/scheduler.conf
	
	I0331 11:40:08.077064   23770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0331 11:40:08.084575   23770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0331 11:40:08.092015   23770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0331 11:40:08.099367   23770 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:08.099424   23770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0331 11:40:08.106600   23770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0331 11:40:08.113977   23770 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0331 11:40:08.114031   23770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0331 11:40:08.121215   23770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0331 11:40:08.128730   23770 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0331 11:40:08.128748   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:40:08.177555   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:40:08.685355   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:40:08.814125   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:40:08.865668   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:40:08.938757   23770 api_server.go:51] waiting for apiserver process to appear ...
	I0331 11:40:08.938833   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:40:09.497730   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:40:09.998510   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:40:10.010853   23770 api_server.go:71] duration metric: took 1.072145622s to wait for apiserver process to appear ...
	I0331 11:40:10.010867   23770 api_server.go:87] waiting for apiserver healthz status ...
	I0331 11:40:10.010878   23770 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54706/healthz ...
	I0331 11:40:10.012092   23770 api_server.go:268] stopped: https://127.0.0.1:54706/healthz: Get "https://127.0.0.1:54706/healthz": EOF
	I0331 11:40:10.512179   23770 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54706/healthz ...
	I0331 11:40:12.010399   23770 api_server.go:278] https://127.0.0.1:54706/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0331 11:40:12.010425   23770 api_server.go:102] status: https://127.0.0.1:54706/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0331 11:40:12.012451   23770 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54706/healthz ...
	I0331 11:40:12.034771   23770 api_server.go:278] https://127.0.0.1:54706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0331 11:40:12.034790   23770 api_server.go:102] status: https://127.0.0.1:54706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
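The 500 bodies above are the apiserver's aggregated healthz report: each [+]/[-] line is one named check, the [-] entries are post-start hooks (CRD informers, RBAC bootstrap roles, priority classes) that simply have not finished yet, and the reasons are withheld because the probe is unauthenticated, the same reason the very first probe got a 403 for system:anonymous. A Go sketch of the probe itself; InsecureSkipVerify is an assumption tolerable only against this local test endpoint:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	resp, err := client.Get("https://127.0.0.1:54706/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 means every check passed; a 500 body lists each failing hook.
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body))
}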
	I0331 11:40:12.512898   23770 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54706/healthz ...
	I0331 11:40:12.519643   23770 api_server.go:278] https://127.0.0.1:54706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0331 11:40:12.519659   23770 api_server.go:102] status: https://127.0.0.1:54706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:40:13.012078   23770 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54706/healthz ...
	I0331 11:40:13.017749   23770 api_server.go:278] https://127.0.0.1:54706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0331 11:40:13.017772   23770 api_server.go:102] status: https://127.0.0.1:54706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0331 11:40:13.512102   23770 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54706/healthz ...
	I0331 11:40:13.518072   23770 api_server.go:278] https://127.0.0.1:54706/healthz returned 200:
	ok
	I0331 11:40:13.526621   23770 api_server.go:140] control plane version: v1.27.0-rc.0
	I0331 11:40:13.526636   23770 api_server.go:130] duration metric: took 3.515914401s to wait for apiserver health ...
	I0331 11:40:13.526643   23770 cni.go:84] Creating CNI manager for ""
	I0331 11:40:13.526654   23770 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 11:40:13.567313   23770 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0331 11:40:13.588026   23770 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0331 11:40:13.602430   23770 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0331 11:40:13.618540   23770 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 11:40:13.626300   23770 system_pods.go:59] 8 kube-system pods found
	I0331 11:40:13.626322   23770 system_pods.go:61] "coredns-5d78c9869d-lhkfh" [e17aab4d-e28f-4fcf-a2a0-08588cc3fc72] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0331 11:40:13.626328   23770 system_pods.go:61] "etcd-newest-cni-822000" [683e1dbe-753f-419d-928e-fe0c60a49a09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0331 11:40:13.626350   23770 system_pods.go:61] "kube-apiserver-newest-cni-822000" [64dc0708-2b05-453c-8ac4-2272f48a76ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0331 11:40:13.626355   23770 system_pods.go:61] "kube-controller-manager-newest-cni-822000" [c7d2db6b-40ce-43e9-8fad-3ae7a4df7140] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0331 11:40:13.626364   23770 system_pods.go:61] "kube-proxy-wn6zc" [bd2f6308-a330-4981-8820-2ab39eb7cb7a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0331 11:40:13.626371   23770 system_pods.go:61] "kube-scheduler-newest-cni-822000" [66975078-034d-453c-98e5-4b33dd22c938] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0331 11:40:13.626379   23770 system_pods.go:61] "metrics-server-74d5c6b9c-m6782" [45599ed0-4a45-4d87-b173-0b518596b8dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0331 11:40:13.626384   23770 system_pods.go:61] "storage-provisioner" [9fb3751d-c4f6-4b3e-9b1b-73a20908688f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0331 11:40:13.626388   23770 system_pods.go:74] duration metric: took 7.835058ms to wait for pod list to return data ...
	I0331 11:40:13.626395   23770 node_conditions.go:102] verifying NodePressure condition ...
	I0331 11:40:13.629878   23770 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0331 11:40:13.629905   23770 node_conditions.go:123] node cpu capacity is 6
	I0331 11:40:13.629929   23770 node_conditions.go:105] duration metric: took 3.529352ms to run NodePressure ...
	I0331 11:40:13.629949   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0331 11:40:14.025103   23770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0331 11:40:14.040498   23770 ops.go:34] apiserver oom_adj: -16
	I0331 11:40:14.040520   23770 kubeadm.go:637] restartCluster took 16.136726087s
	I0331 11:40:14.040528   23770 kubeadm.go:403] StartCluster complete in 16.165072846s
	I0331 11:40:14.040546   23770 settings.go:142] acquiring lock: {Name:mk3cb9e1bd7c44f22a996c12a2b2b34c5bbc4ab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:40:14.040653   23770 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 11:40:14.041379   23770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/kubeconfig: {Name:mkc0b1389479e511140b6b42bee4e1f98dfd2b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 11:40:14.041626   23770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0331 11:40:14.041676   23770 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0331 11:40:14.041792   23770 config.go:182] Loaded profile config "newest-cni-822000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0331 11:40:14.041798   23770 addons.go:66] Setting storage-provisioner=true in profile "newest-cni-822000"
	I0331 11:40:14.041815   23770 addons.go:66] Setting dashboard=true in profile "newest-cni-822000"
	I0331 11:40:14.041838   23770 addons.go:228] Setting addon dashboard=true in "newest-cni-822000"
	I0331 11:40:14.041817   23770 addons.go:228] Setting addon storage-provisioner=true in "newest-cni-822000"
	W0331 11:40:14.041845   23770 addons.go:237] addon dashboard should already be in state true
	I0331 11:40:14.041818   23770 addons.go:66] Setting default-storageclass=true in profile "newest-cni-822000"
	W0331 11:40:14.041883   23770 addons.go:237] addon storage-provisioner should already be in state true
	I0331 11:40:14.041900   23770 host.go:66] Checking if "newest-cni-822000" exists ...
	I0331 11:40:14.041889   23770 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-822000"
	I0331 11:40:14.041823   23770 addons.go:66] Setting metrics-server=true in profile "newest-cni-822000"
	I0331 11:40:14.041962   23770 addons.go:228] Setting addon metrics-server=true in "newest-cni-822000"
	I0331 11:40:14.041964   23770 host.go:66] Checking if "newest-cni-822000" exists ...
	W0331 11:40:14.041970   23770 addons.go:237] addon metrics-server should already be in state true
	I0331 11:40:14.041996   23770 host.go:66] Checking if "newest-cni-822000" exists ...
	I0331 11:40:14.042233   23770 cli_runner.go:164] Run: docker container inspect newest-cni-822000 --format={{.State.Status}}
	I0331 11:40:14.042326   23770 cli_runner.go:164] Run: docker container inspect newest-cni-822000 --format={{.State.Status}}
	I0331 11:40:14.042370   23770 cli_runner.go:164] Run: docker container inspect newest-cni-822000 --format={{.State.Status}}
	I0331 11:40:14.043266   23770 cli_runner.go:164] Run: docker container inspect newest-cni-822000 --format={{.State.Status}}
	I0331 11:40:14.100766   23770 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-822000" context rescaled to 1 replicas
	I0331 11:40:14.100824   23770 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0331 11:40:14.122463   23770 out.go:177] * Verifying Kubernetes components...
	I0331 11:40:14.180208   23770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 11:40:14.205158   23770 addons.go:228] Setting addon default-storageclass=true in "newest-cni-822000"
	W0331 11:40:14.235363   23770 addons.go:237] addon default-storageclass should already be in state true
	I0331 11:40:14.235301   23770 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0331 11:40:14.235397   23770 host.go:66] Checking if "newest-cni-822000" exists ...
	I0331 11:40:14.235963   23770 cli_runner.go:164] Run: docker container inspect newest-cni-822000 --format={{.State.Status}}
	I0331 11:40:14.256553   23770 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0331 11:40:14.256572   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0331 11:40:14.277196   23770 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0331 11:40:14.298311   23770 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0331 11:40:14.298518   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:40:14.316310   23770 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0331 11:40:14.316365   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:40:14.340277   23770 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:40:14.361191   23770 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0331 11:40:14.361213   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0331 11:40:14.361362   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:40:14.398534   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0331 11:40:14.398552   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0331 11:40:14.399240   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:40:14.408603   23770 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0331 11:40:14.408626   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0331 11:40:14.408792   23770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-822000
	I0331 11:40:14.422818   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:40:14.425040   23770 api_server.go:51] waiting for apiserver process to appear ...
	I0331 11:40:14.425155   23770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 11:40:14.445633   23770 api_server.go:71] duration metric: took 344.78349ms to wait for apiserver process to appear ...
	I0331 11:40:14.445651   23770 api_server.go:87] waiting for apiserver healthz status ...
	I0331 11:40:14.445661   23770 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:54706/healthz ...
	I0331 11:40:14.490326   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:40:14.490765   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:40:14.493835   23770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54702 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/newest-cni-822000/id_rsa Username:docker}
	I0331 11:40:14.500882   23770 api_server.go:278] https://127.0.0.1:54706/healthz returned 200:
	ok
	I0331 11:40:14.502838   23770 api_server.go:140] control plane version: v1.27.0-rc.0
	I0331 11:40:14.502853   23770 api_server.go:130] duration metric: took 57.199166ms to wait for apiserver health ...
	I0331 11:40:14.502860   23770 system_pods.go:43] waiting for kube-system pods to appear ...
	I0331 11:40:14.509035   23770 system_pods.go:59] 8 kube-system pods found
	I0331 11:40:14.509053   23770 system_pods.go:61] "coredns-5d78c9869d-lhkfh" [e17aab4d-e28f-4fcf-a2a0-08588cc3fc72] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0331 11:40:14.509061   23770 system_pods.go:61] "etcd-newest-cni-822000" [683e1dbe-753f-419d-928e-fe0c60a49a09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0331 11:40:14.509073   23770 system_pods.go:61] "kube-apiserver-newest-cni-822000" [64dc0708-2b05-453c-8ac4-2272f48a76ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0331 11:40:14.509080   23770 system_pods.go:61] "kube-controller-manager-newest-cni-822000" [c7d2db6b-40ce-43e9-8fad-3ae7a4df7140] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0331 11:40:14.509086   23770 system_pods.go:61] "kube-proxy-wn6zc" [bd2f6308-a330-4981-8820-2ab39eb7cb7a] Running
	I0331 11:40:14.509092   23770 system_pods.go:61] "kube-scheduler-newest-cni-822000" [66975078-034d-453c-98e5-4b33dd22c938] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0331 11:40:14.509099   23770 system_pods.go:61] "metrics-server-74d5c6b9c-m6782" [45599ed0-4a45-4d87-b173-0b518596b8dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0331 11:40:14.509104   23770 system_pods.go:61] "storage-provisioner" [9fb3751d-c4f6-4b3e-9b1b-73a20908688f] Running
	I0331 11:40:14.509108   23770 system_pods.go:74] duration metric: took 6.241135ms to wait for pod list to return data ...
	I0331 11:40:14.509113   23770 default_sa.go:34] waiting for default service account to be created ...
	I0331 11:40:14.512137   23770 default_sa.go:45] found service account: "default"
	I0331 11:40:14.512155   23770 default_sa.go:55] duration metric: took 3.037405ms for default service account to be created ...
	I0331 11:40:14.512165   23770 kubeadm.go:578] duration metric: took 411.320014ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0331 11:40:14.512185   23770 node_conditions.go:102] verifying NodePressure condition ...
	I0331 11:40:14.516465   23770 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0331 11:40:14.516480   23770 node_conditions.go:123] node cpu capacity is 6
	I0331 11:40:14.516489   23770 node_conditions.go:105] duration metric: took 4.299969ms to run NodePressure ...
	I0331 11:40:14.516498   23770 start.go:228] waiting for startup goroutines ...
	I0331 11:40:14.593138   23770 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0331 11:40:14.593151   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0331 11:40:14.599018   23770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0331 11:40:14.606910   23770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0331 11:40:14.607879   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0331 11:40:14.607890   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0331 11:40:14.608062   23770 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0331 11:40:14.608073   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0331 11:40:14.623582   23770 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0331 11:40:14.623597   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0331 11:40:14.623927   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0331 11:40:14.623936   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0331 11:40:14.639245   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0331 11:40:14.639263   23770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0331 11:40:14.639263   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0331 11:40:14.704734   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0331 11:40:14.704750   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0331 11:40:14.721226   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0331 11:40:14.721246   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0331 11:40:14.796674   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0331 11:40:14.796697   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0331 11:40:14.837601   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0331 11:40:14.837620   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0331 11:40:14.911341   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0331 11:40:14.911357   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0331 11:40:14.926997   23770 addons.go:420] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0331 11:40:14.927013   23770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0331 11:40:14.944200   23770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0331 11:40:15.650841   23770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.051845395s)
	I0331 11:40:15.650877   23770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.043989575s)
	W0331 11:40:15.650895   23770 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR; received from peer
	I0331 11:40:15.650931   23770 retry.go:31] will retry after 307.399401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR; received from peer
	I0331 11:40:15.650934   23770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.011700512s)
	I0331 11:40:15.650948   23770 addons.go:464] Verifying addon metrics-server=true in "newest-cni-822000"
	I0331 11:40:15.845301   23770 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-822000 addons enable metrics-server	
	
	
	I0331 11:40:15.960522   23770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0331 11:40:16.191309   23770 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0331 11:40:16.233620   23770 addons.go:499] enable addons completed in 2.192046557s: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0331 11:40:16.233697   23770 start.go:233] waiting for cluster config update ...
	I0331 11:40:16.233717   23770 start.go:242] writing updated cluster config ...
	I0331 11:40:16.234347   23770 ssh_runner.go:195] Run: rm -f paused
	I0331 11:40:16.273788   23770 start.go:557] kubectl: 1.25.4, cluster: 1.27.0-rc.0 (minor skew: 2)
	I0331 11:40:16.294925   23770 out.go:177] 
	W0331 11:40:16.316063   23770 out.go:239] ! /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.27.0-rc.0.
	I0331 11:40:16.337554   23770 out.go:177]   - Want kubectl v1.27.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0331 11:40:16.359271   23770 out.go:177] * Done! kubectl is now configured to use "newest-cni-822000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Fri 2023-03-31 18:19:17 UTC, end at Fri 2023-03-31 18:46:31 UTC. --
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.224647123Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225192443Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225275873Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.225945996Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226028503Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226077622Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226092567Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226120653Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226137969Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226158434Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226172127Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226220646Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226362523Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226626084Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.226676606Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.227155872Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.558285544Z" level=info msg="Loading containers: start."
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.640881276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.675071947Z" level=info msg="Loading containers: done."
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.683436290Z" level=info msg="Docker daemon" commit=219f21b graphdriver=overlay2 version=23.0.2
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.683497721Z" level=info msg="Daemon has completed initialization"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.704168089Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 31 18:19:29 old-k8s-version-221000 systemd[1]: Started Docker Application Container Engine.
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.707476643Z" level=info msg="API listen on [::]:2376"
	Mar 31 18:19:29 old-k8s-version-221000 dockerd[867]: time="2023-03-31T18:19:29.715129950Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-03-31T18:46:33Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  18:46:34 up  1:45,  0 users,  load average: 0.28, 0.57, 0.94
	Linux old-k8s-version-221000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2023-03-31 18:19:17 UTC, end at Fri 2023-03-31 18:46:34 UTC. --
	Mar 31 18:46:32 old-k8s-version-221000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 31 18:46:32 old-k8s-version-221000 kubelet[34334]: I0331 18:46:32.666088   34334 server.go:410] Version: v1.16.0
	Mar 31 18:46:32 old-k8s-version-221000 kubelet[34334]: I0331 18:46:32.666412   34334 plugins.go:100] No cloud provider specified.
	Mar 31 18:46:32 old-k8s-version-221000 kubelet[34334]: I0331 18:46:32.666422   34334 server.go:773] Client rotation is on, will bootstrap in background
	Mar 31 18:46:32 old-k8s-version-221000 kubelet[34334]: I0331 18:46:32.668129   34334 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 31 18:46:32 old-k8s-version-221000 kubelet[34334]: W0331 18:46:32.668790   34334 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 31 18:46:32 old-k8s-version-221000 kubelet[34334]: W0331 18:46:32.668867   34334 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 31 18:46:32 old-k8s-version-221000 kubelet[34334]: F0331 18:46:32.668897   34334 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 31 18:46:32 old-k8s-version-221000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 31 18:46:32 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 31 18:46:33 old-k8s-version-221000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Mar 31 18:46:33 old-k8s-version-221000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 31 18:46:33 old-k8s-version-221000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 31 18:46:33 old-k8s-version-221000 kubelet[34346]: I0331 18:46:33.404273   34346 server.go:410] Version: v1.16.0
	Mar 31 18:46:33 old-k8s-version-221000 kubelet[34346]: I0331 18:46:33.404887   34346 plugins.go:100] No cloud provider specified.
	Mar 31 18:46:33 old-k8s-version-221000 kubelet[34346]: I0331 18:46:33.404921   34346 server.go:773] Client rotation is on, will bootstrap in background
	Mar 31 18:46:33 old-k8s-version-221000 kubelet[34346]: I0331 18:46:33.406736   34346 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 31 18:46:33 old-k8s-version-221000 kubelet[34346]: W0331 18:46:33.407618   34346 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 31 18:46:33 old-k8s-version-221000 kubelet[34346]: W0331 18:46:33.407743   34346 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 31 18:46:33 old-k8s-version-221000 kubelet[34346]: F0331 18:46:33.407804   34346 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 31 18:46:33 old-k8s-version-221000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 31 18:46:33 old-k8s-version-221000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 31 18:46:34 old-k8s-version-221000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Mar 31 18:46:34 old-k8s-version-221000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 31 18:46:34 old-k8s-version-221000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0331 11:46:33.885544   24332 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
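
The `api_server.go:252` entries that dominate the stdout above show minikube polling the apiserver's /healthz endpoint roughly every 500ms, treating a 500 response (with its list of [+]/[-] poststarthook results) as "not ready yet" until a 200 comes back. Below is a minimal, self-contained Go sketch of that poll-until-healthy pattern; the URL, interval, and timeout are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout
	// elapses. A 500 response (failed poststarthooks, as in the log above)
	// is treated as "not ready yet" and retried after the interval.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		// The apiserver serves a self-signed certificate on localhost, so
		// this illustrative probe skips verification.
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is up
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Port 54706 matches the forwarded apiserver port seen in the log.
		err := waitForHealthz("https://127.0.0.1:54706/healthz", 500*time.Millisecond, time.Minute)
		fmt.Println("healthz wait:", err)
	}
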
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 2 (403.377862ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-221000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.72s)
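
The `retry.go:31` entry in the stdout above ("will retry after 307.399401ms") reflects minikube retrying a failed `kubectl apply` after a delay. A hedged Go sketch of that retry-with-backoff idea follows; the attempt count, base delay, and jitter range are hypothetical, not minikube's actual parameters:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry runs a command, retrying with growing, jittered delays
	// on failure, similar in spirit to the retry.go behavior in the log.
	func applyWithRetry(attempts int, name string, args ...string) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			// Grow the base delay each attempt and add nanosecond-granularity
			// jitter, which is why a logged delay such as 307.399401ms is not
			// a round number.
			delay := time.Duration(i+1)*200*time.Millisecond +
				time.Duration(rand.Int63n(int64(200*time.Millisecond)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		err := applyWithRetry(3, "kubectl", "apply", "-f", "/etc/kubernetes/addons/storageclass.yaml")
		fmt.Println("apply result:", err)
	}
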

                                                
                                    

Test pass (283/318)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 22.85
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.26.3/json-events 22.89
11 TestDownloadOnly/v1.26.3/preload-exists 0
14 TestDownloadOnly/v1.26.3/kubectl 0
15 TestDownloadOnly/v1.26.3/LogsDuration 0.28
17 TestDownloadOnly/v1.27.0-rc.0/json-events 22.29
18 TestDownloadOnly/v1.27.0-rc.0/preload-exists 0
21 TestDownloadOnly/v1.27.0-rc.0/kubectl 0
22 TestDownloadOnly/v1.27.0-rc.0/LogsDuration 0.3
23 TestDownloadOnly/DeleteAll 0.69
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
25 TestDownloadOnlyKic 2.14
26 TestBinaryMirror 1.72
27 TestOffline 47.1
29 TestAddons/Setup 137.81
33 TestAddons/parallel/MetricsServer 5.64
34 TestAddons/parallel/HelmTiller 12.89
36 TestAddons/parallel/CSI 50.63
37 TestAddons/parallel/Headlamp 17.32
38 TestAddons/parallel/CloudSpanner 5.51
41 TestAddons/serial/GCPAuth/Namespaces 0.1
42 TestAddons/StoppedEnableDisable 11.52
43 TestCertOptions 28.24
44 TestCertExpiration 242.62
45 TestDockerFlags 31.43
46 TestForceSystemdFlag 27.03
47 TestForceSystemdEnv 30.89
49 TestHyperKitDriverInstallOrUpdate 6.73
53 TestErrorSpam/start 2.69
54 TestErrorSpam/status 1.29
55 TestErrorSpam/pause 1.79
56 TestErrorSpam/unpause 1.81
57 TestErrorSpam/stop 11.46
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 89.31
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 40.53
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 7.39
69 TestFunctional/serial/CacheCmd/cache/add_local 1.66
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
71 TestFunctional/serial/CacheCmd/cache/list 0.07
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.78
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.52
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.71
77 TestFunctional/serial/ExtraConfig 46.46
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 3.2
80 TestFunctional/serial/LogsFileCmd 3.03
82 TestFunctional/parallel/ConfigCmd 0.4
83 TestFunctional/parallel/DashboardCmd 9.72
84 TestFunctional/parallel/DryRun 1.59
85 TestFunctional/parallel/InternationalLanguage 0.72
86 TestFunctional/parallel/StatusCmd 1.26
91 TestFunctional/parallel/AddonsCmd 0.24
92 TestFunctional/parallel/PersistentVolumeClaim 27.58
94 TestFunctional/parallel/SSHCmd 0.82
95 TestFunctional/parallel/CpCmd 2.05
96 TestFunctional/parallel/MySQL 26.68
97 TestFunctional/parallel/FileSync 0.51
98 TestFunctional/parallel/CertSync 2.65
102 TestFunctional/parallel/NodeLabels 0.07
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
106 TestFunctional/parallel/License 0.76
107 TestFunctional/parallel/Version/short 0.11
108 TestFunctional/parallel/Version/components 1.05
109 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
110 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
111 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
112 TestFunctional/parallel/ImageCommands/ImageListYaml 0.37
113 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
114 TestFunctional/parallel/ImageCommands/Setup 2.55
115 TestFunctional/parallel/DockerEnv/bash 1.86
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.42
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.37
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.49
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.49
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.89
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.09
123 TestFunctional/parallel/ImageCommands/ImageRemove 0.75
124 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.89
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.54
126 TestFunctional/parallel/ServiceCmd/DeployApp 20.13
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.2
132 TestFunctional/parallel/ServiceCmd/List 0.63
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
134 TestFunctional/parallel/ServiceCmd/HTTPS 15
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
141 TestFunctional/parallel/ServiceCmd/Format 15
142 TestFunctional/parallel/ServiceCmd/URL 15
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
144 TestFunctional/parallel/ProfileCmd/profile_list 0.48
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
146 TestFunctional/parallel/MountCmd/any-port 10.11
147 TestFunctional/parallel/MountCmd/specific-port 2.73
148 TestFunctional/delete_addon-resizer_images 0.15
149 TestFunctional/delete_my-image_image 0.06
150 TestFunctional/delete_minikube_cached_images 0.06
154 TestImageBuild/serial/NormalBuild 2.35
155 TestImageBuild/serial/BuildWithBuildArg 0.96
156 TestImageBuild/serial/BuildWithDockerIgnore 0.47
157 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
167 TestJSONOutput/start/Command 41.68
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.6
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.58
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 10.87
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.75
192 TestKicCustomNetwork/create_custom_network 26.53
193 TestKicCustomNetwork/use_default_bridge_network 26.58
194 TestKicExistingNetwork 27.15
195 TestKicCustomSubnet 26.44
196 TestKicStaticIP 27.19
197 TestMainNoArgs 0.07
198 TestMinikubeProfile 56.31
201 TestMountStart/serial/StartWithMountFirst 8.41
202 TestMountStart/serial/VerifyMountFirst 0.41
203 TestMountStart/serial/StartWithMountSecond 8.73
204 TestMountStart/serial/VerifyMountSecond 0.41
205 TestMountStart/serial/DeleteFirst 2.22
206 TestMountStart/serial/VerifyMountPostDelete 0.41
207 TestMountStart/serial/Stop 1.58
208 TestMountStart/serial/RestartStopped 6.32
209 TestMountStart/serial/VerifyMountPostStop 0.41
212 TestMultiNode/serial/FreshStart2Nodes 73.09
213 TestMultiNode/serial/DeployApp2Nodes 44.39
214 TestMultiNode/serial/PingHostFrom2Pods 0.85
215 TestMultiNode/serial/AddNode 19.66
216 TestMultiNode/serial/ProfileList 0.46
217 TestMultiNode/serial/CopyFile 14.72
218 TestMultiNode/serial/StopNode 3.09
219 TestMultiNode/serial/StartAfterStop 10.64
220 TestMultiNode/serial/RestartKeepsNodes 87.22
221 TestMultiNode/serial/DeleteNode 6.27
222 TestMultiNode/serial/StopMultiNode 22
223 TestMultiNode/serial/RestartMultiNode 71.81
224 TestMultiNode/serial/ValidateNameConflict 29.01
228 TestPreload 163.88
230 TestScheduledStopUnix 99.06
231 TestSkaffold 63.67
233 TestInsufficientStorage 14.69
249 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 12.64
250 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 17.36
251 TestStoppedBinaryUpgrade/Setup 4.41
253 TestStoppedBinaryUpgrade/MinikubeLogs 3.49
255 TestPause/serial/Start 42.79
256 TestPause/serial/SecondStartNoReconfiguration 40.34
257 TestPause/serial/Pause 0.64
258 TestPause/serial/VerifyStatus 0.42
259 TestPause/serial/Unpause 0.67
260 TestPause/serial/PauseAgain 0.72
261 TestPause/serial/DeletePaused 2.64
262 TestPause/serial/VerifyDeletedResources 0.57
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
272 TestNoKubernetes/serial/StartWithK8s 25.84
273 TestNoKubernetes/serial/StartWithStopK8s 18.22
274 TestNoKubernetes/serial/Start 7.45
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
276 TestNoKubernetes/serial/ProfileList 1.41
277 TestNoKubernetes/serial/Stop 1.63
278 TestNoKubernetes/serial/StartNoArgs 5.37
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
280 TestNetworkPlugins/group/auto/Start 45.53
281 TestNetworkPlugins/group/auto/KubeletFlags 0.42
282 TestNetworkPlugins/group/auto/NetCatPod 12.2
283 TestNetworkPlugins/group/auto/DNS 0.13
284 TestNetworkPlugins/group/auto/Localhost 0.12
285 TestNetworkPlugins/group/auto/HairPin 0.12
286 TestNetworkPlugins/group/kindnet/Start 55.08
287 TestNetworkPlugins/group/calico/Start 70.48
288 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
289 TestNetworkPlugins/group/kindnet/KubeletFlags 0.49
290 TestNetworkPlugins/group/kindnet/NetCatPod 12.49
291 TestNetworkPlugins/group/kindnet/DNS 0.13
292 TestNetworkPlugins/group/kindnet/Localhost 0.13
293 TestNetworkPlugins/group/kindnet/HairPin 0.12
294 TestNetworkPlugins/group/custom-flannel/Start 57.9
295 TestNetworkPlugins/group/calico/ControllerPod 5.02
296 TestNetworkPlugins/group/calico/KubeletFlags 0.41
297 TestNetworkPlugins/group/calico/NetCatPod 13.21
298 TestNetworkPlugins/group/calico/DNS 0.13
299 TestNetworkPlugins/group/calico/Localhost 0.12
300 TestNetworkPlugins/group/calico/HairPin 0.11
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.47
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.26
303 TestNetworkPlugins/group/false/Start 42.79
304 TestNetworkPlugins/group/custom-flannel/DNS 0.16
305 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
306 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
307 TestNetworkPlugins/group/enable-default-cni/Start 44.19
308 TestNetworkPlugins/group/false/KubeletFlags 0.42
309 TestNetworkPlugins/group/false/NetCatPod 13.21
310 TestNetworkPlugins/group/false/DNS 0.15
311 TestNetworkPlugins/group/false/Localhost 0.12
312 TestNetworkPlugins/group/false/HairPin 0.11
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.54
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.33
315 TestNetworkPlugins/group/flannel/Start 56.05
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
319 TestNetworkPlugins/group/bridge/Start 43.39
320 TestNetworkPlugins/group/flannel/ControllerPod 5.02
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
322 TestNetworkPlugins/group/flannel/NetCatPod 11.2
323 TestNetworkPlugins/group/flannel/DNS 0.13
324 TestNetworkPlugins/group/flannel/Localhost 0.13
325 TestNetworkPlugins/group/flannel/HairPin 0.11
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.44
327 TestNetworkPlugins/group/bridge/NetCatPod 11.21
328 TestNetworkPlugins/group/bridge/DNS 0.14
329 TestNetworkPlugins/group/bridge/Localhost 0.12
330 TestNetworkPlugins/group/bridge/HairPin 0.12
331 TestNetworkPlugins/group/kubenet/Start 52.84
334 TestNetworkPlugins/group/kubenet/KubeletFlags 0.42
335 TestNetworkPlugins/group/kubenet/NetCatPod 17.23
336 TestNetworkPlugins/group/kubenet/DNS 0.13
337 TestNetworkPlugins/group/kubenet/Localhost 0.12
338 TestNetworkPlugins/group/kubenet/HairPin 0.11
340 TestStartStop/group/no-preload/serial/FirstStart 66.7
341 TestStartStop/group/no-preload/serial/DeployApp 14.29
342 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
343 TestStartStop/group/no-preload/serial/Stop 10.89
344 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
345 TestStartStop/group/no-preload/serial/SecondStart 305.12
348 TestStartStop/group/old-k8s-version/serial/Stop 1.59
349 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 17.02
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.47
354 TestStartStop/group/no-preload/serial/Pause 3.29
356 TestStartStop/group/embed-certs/serial/FirstStart 42.63
357 TestStartStop/group/embed-certs/serial/DeployApp 10.33
358 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.83
359 TestStartStop/group/embed-certs/serial/Stop 11
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.37
361 TestStartStop/group/embed-certs/serial/SecondStart 557.85
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
364 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
365 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
366 TestStartStop/group/embed-certs/serial/Pause 3.18
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.62
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.91
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.37
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 309.34
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.45
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.25
380 TestStartStop/group/newest-cni/serial/FirstStart 38.07
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
383 TestStartStop/group/newest-cni/serial/Stop 11.07
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.37
385 TestStartStop/group/newest-cni/serial/SecondStart 24.78
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
389 TestStartStop/group/newest-cni/serial/Pause 3.14
TestDownloadOnly/v1.16.0/json-events (22.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (22.849687638s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (22.85s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-557000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-557000: exit status 85 (281.717012ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.29.0 | 31 Mar 23 10:19 PDT |          |
	|         | -p download-only-557000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 10:19:55
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 10:19:55.312413    2804 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:19:55.312718    2804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:19:55.312725    2804 out.go:309] Setting ErrFile to fd 2...
	I0331 10:19:55.312742    2804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:19:55.312898    2804 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	W0331 10:19:55.313046    2804 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16144-2324/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16144-2324/.minikube/config/config.json: no such file or directory
	I0331 10:19:55.314975    2804 out.go:303] Setting JSON to true
	I0331 10:19:55.335012    2804 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1163,"bootTime":1680282032,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 10:19:55.335098    2804 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 10:19:55.358484    2804 out.go:97] [download-only-557000] minikube v1.29.0 on Darwin 13.3
	I0331 10:19:55.358741    2804 notify.go:220] Checking for updates...
	W0331 10:19:55.358745    2804 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball: no such file or directory
	I0331 10:19:55.379233    2804 out.go:169] MINIKUBE_LOCATION=16144
	I0331 10:19:55.400288    2804 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 10:19:55.422463    2804 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 10:19:55.444483    2804 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 10:19:55.466312    2804 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	W0331 10:19:55.509237    2804 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0331 10:19:55.509596    2804 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 10:19:55.572795    2804 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 10:19:55.572916    2804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:19:55.769725    2804 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:46 SystemTime:2023-03-31 17:19:55.626073754 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:19:55.791029    2804 out.go:97] Using the docker driver based on user configuration
	I0331 10:19:55.791068    2804 start.go:295] selected driver: docker
	I0331 10:19:55.791078    2804 start.go:859] validating driver "docker" against <nil>
	I0331 10:19:55.791298    2804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:19:55.979028    2804 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:46 SystemTime:2023-03-31 17:19:55.844270773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:19:55.979141    2804 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0331 10:19:55.983606    2804 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0331 10:19:55.983769    2804 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0331 10:19:56.005832    2804 out.go:169] Using Docker Desktop driver with root privileges
	I0331 10:19:56.027794    2804 cni.go:84] Creating CNI manager for ""
	I0331 10:19:56.027828    2804 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0331 10:19:56.027843    2804 start_flags.go:319] config:
	{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:19:56.049565    2804 out.go:97] Starting control plane node download-only-557000 in cluster download-only-557000
	I0331 10:19:56.049638    2804 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 10:19:56.071582    2804 out.go:97] Pulling base image ...
	I0331 10:19:56.071723    2804 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 10:19:56.071734    2804 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 10:19:56.130488    2804 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 to local cache
	I0331 10:19:56.130728    2804 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local cache directory
	I0331 10:19:56.130845    2804 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 to local cache
	I0331 10:19:56.184437    2804 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0331 10:19:56.184482    2804 cache.go:57] Caching tarball of preloaded images
	I0331 10:19:56.184879    2804 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 10:19:56.206901    2804 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0331 10:19:56.206933    2804 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:19:56.415532    2804 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0331 10:20:09.106231    2804 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:09.106370    2804 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:09.708575    2804 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0331 10:20:09.708781    2804 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/download-only-557000/config.json ...
	I0331 10:20:09.708808    2804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/download-only-557000/config.json: {Name:mkfe859a9132babc8ab853092ef12e7ae36fd3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0331 10:20:09.709056    2804 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0331 10:20:09.709317    2804 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-557000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)
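
Note: the non-zero exit here is expected, not a failure. A download-only profile never creates a node, so `minikube logs` reports `The control plane node "" does not exist.` and exits 85, which is what the subtest asserts. A standalone sketch of that assertion using os/exec (binary path and profile name copied from the log above; adjust for your checkout):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-557000")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// No node was ever created for this profile, so a non-zero exit
		// (85 in the run above) is the expected outcome.
		fmt.Printf("exit code %d\n%s", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}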

                                                
                                    
TestDownloadOnly/v1.26.3/json-events (22.89s)

=== RUN   TestDownloadOnly/v1.26.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=docker : (22.890288025s)
--- PASS: TestDownloadOnly/v1.26.3/json-events (22.89s)

                                                
                                    
TestDownloadOnly/v1.26.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.3/preload-exists
--- PASS: TestDownloadOnly/v1.26.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.3/kubectl
--- PASS: TestDownloadOnly/v1.26.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.3/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.26.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-557000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-557000: exit status 85 (278.022765ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.29.0 | 31 Mar 23 10:19 PDT |          |
	|         | -p download-only-557000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-557000 | jenkins | v1.29.0 | 31 Mar 23 10:20 PDT |          |
	|         | -p download-only-557000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 10:20:18
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 10:20:18.444840    2853 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:20:18.444992    2853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:20:18.444998    2853 out.go:309] Setting ErrFile to fd 2...
	I0331 10:20:18.445002    2853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:20:18.445109    2853 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	W0331 10:20:18.445205    2853 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16144-2324/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16144-2324/.minikube/config/config.json: no such file or directory
	I0331 10:20:18.446418    2853 out.go:303] Setting JSON to true
	I0331 10:20:18.466574    2853 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1186,"bootTime":1680282032,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 10:20:18.466739    2853 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 10:20:18.488011    2853 out.go:97] [download-only-557000] minikube v1.29.0 on Darwin 13.3
	I0331 10:20:18.488268    2853 notify.go:220] Checking for updates...
	I0331 10:20:18.510114    2853 out.go:169] MINIKUBE_LOCATION=16144
	I0331 10:20:18.531273    2853 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 10:20:18.553220    2853 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 10:20:18.575208    2853 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 10:20:18.596351    2853 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	W0331 10:20:18.639141    2853 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0331 10:20:18.639809    2853 config.go:182] Loaded profile config "download-only-557000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0331 10:20:18.639903    2853 start.go:767] api.Load failed for download-only-557000: filestore "download-only-557000": Docker machine "download-only-557000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0331 10:20:18.639987    2853 driver.go:365] Setting default libvirt URI to qemu:///system
	W0331 10:20:18.640026    2853 start.go:767] api.Load failed for download-only-557000: filestore "download-only-557000": Docker machine "download-only-557000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0331 10:20:18.703351    2853 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 10:20:18.703478    2853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:20:18.901392    2853 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:46 SystemTime:2023-03-31 17:20:18.755878849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:20:18.923328    2853 out.go:97] Using the docker driver based on existing profile
	I0331 10:20:18.923388    2853 start.go:295] selected driver: docker
	I0331 10:20:18.923399    2853 start.go:859] validating driver "docker" against &{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-557000 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:20:18.923700    2853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:20:19.116508    2853 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:46 SystemTime:2023-03-31 17:20:18.977603157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:20:19.119209    2853 cni.go:84] Creating CNI manager for ""
	I0331 10:20:19.119240    2853 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 10:20:19.119257    2853 start_flags.go:319] config:
	{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:download-only-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:20:19.141719    2853 out.go:97] Starting control plane node download-only-557000 in cluster download-only-557000
	I0331 10:20:19.141780    2853 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 10:20:19.162943    2853 out.go:97] Pulling base image ...
	I0331 10:20:19.163003    2853 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 10:20:19.163097    2853 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 10:20:19.221352    2853 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 to local cache
	I0331 10:20:19.221511    2853 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local cache directory
	I0331 10:20:19.221533    2853 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local cache directory, skipping pull
	I0331 10:20:19.221540    2853 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in cache, skipping pull
	I0331 10:20:19.221548    2853 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 as a tarball
	I0331 10:20:19.250185    2853 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.3/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0331 10:20:19.250229    2853 cache.go:57] Caching tarball of preloaded images
	I0331 10:20:19.250551    2853 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 10:20:19.273003    2853 out.go:97] Downloading Kubernetes v1.26.3 preload ...
	I0331 10:20:19.273036    2853 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:19.475492    2853 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.3/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4?checksum=md5:b698631b54adb014b111f0258a79e081 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0331 10:20:36.667918    2853 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:36.668058    2853 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:37.269611    2853 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0331 10:20:37.269746    2853 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/download-only-557000/config.json ...
	I0331 10:20:37.270066    2853 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0331 10:20:37.270344    2853 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/darwin/amd64/v1.26.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-557000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.3/LogsDuration (0.28s)
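
Note: the `?checksum=md5:...` suffix in the download.go:107 lines above appears to follow the hashicorp/go-getter convention, where a checksum query parameter triggers verification of the file after it is fetched. A standalone sketch reproducing the preload fetch with that library (the source URL is copied from the log; the destination path is illustrative):

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The ?checksum=md5:... suffix tells go-getter to verify the file
	// against the given digest once the download completes.
	src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.3/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4?checksum=md5:b698631b54adb014b111f0258a79e081"
	dst := "/tmp/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4"

	client := &getter.Client{
		Src:  src,
		Dst:  dst,
		Mode: getter.ClientModeFile, // fetch a single file, not a directory
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed (checksum mismatch or network error): %v", err)
	}
	log.Printf("verified download written to %s", dst)
}

The kubectl downloads in the same logs use the same mechanism with `checksum=file:<url>`, pointing the verifier at a published .sha1/.sha256 file instead of an inline digest.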

                                                
                                    
TestDownloadOnly/v1.27.0-rc.0/json-events (22.29s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.27.0-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-557000 --force --alsologtostderr --kubernetes-version=v1.27.0-rc.0 --container-runtime=docker --driver=docker : (22.286784516s)
--- PASS: TestDownloadOnly/v1.27.0-rc.0/json-events (22.29s)

                                                
                                    
TestDownloadOnly/v1.27.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.27.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.27.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.0-rc.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-557000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-557000: exit status 85 (294.193454ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-557000 | jenkins | v1.29.0 | 31 Mar 23 10:19 PDT |          |
	|         | -p download-only-557000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-557000 | jenkins | v1.29.0 | 31 Mar 23 10:20 PDT |          |
	|         | -p download-only-557000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-557000 | jenkins | v1.29.0 | 31 Mar 23 10:20 PDT |          |
	|         | -p download-only-557000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/31 10:20:41
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0331 10:20:41.615437    2904 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:20:41.615624    2904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:20:41.615630    2904 out.go:309] Setting ErrFile to fd 2...
	I0331 10:20:41.615634    2904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:20:41.615746    2904 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	W0331 10:20:41.615836    2904 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16144-2324/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16144-2324/.minikube/config/config.json: no such file or directory
	I0331 10:20:41.617097    2904 out.go:303] Setting JSON to true
	I0331 10:20:41.637666    2904 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1209,"bootTime":1680282032,"procs":378,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 10:20:41.637755    2904 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 10:20:41.659182    2904 out.go:97] [download-only-557000] minikube v1.29.0 on Darwin 13.3
	I0331 10:20:41.659281    2904 notify.go:220] Checking for updates...
	I0331 10:20:41.680117    2904 out.go:169] MINIKUBE_LOCATION=16144
	I0331 10:20:41.701089    2904 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 10:20:41.722322    2904 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 10:20:41.743349    2904 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 10:20:41.764199    2904 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	W0331 10:20:41.806388    2904 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0331 10:20:41.807044    2904 config.go:182] Loaded profile config "download-only-557000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	W0331 10:20:41.807130    2904 start.go:767] api.Load failed for download-only-557000: filestore "download-only-557000": Docker machine "download-only-557000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0331 10:20:41.807223    2904 driver.go:365] Setting default libvirt URI to qemu:///system
	W0331 10:20:41.807257    2904 start.go:767] api.Load failed for download-only-557000: filestore "download-only-557000": Docker machine "download-only-557000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0331 10:20:41.871894    2904 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 10:20:41.872003    2904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:20:42.060543    2904 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:46 SystemTime:2023-03-31 17:20:41.923217082 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:20:42.082162    2904 out.go:97] Using the docker driver based on existing profile
	I0331 10:20:42.082278    2904 start.go:295] selected driver: docker
	I0331 10:20:42.082286    2904 start.go:859] validating driver "docker" against &{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:download-only-557000 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:20:42.082590    2904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:20:42.271428    2904 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:46 SystemTime:2023-03-31 17:20:42.136565586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:20:42.274109    2904 cni.go:84] Creating CNI manager for ""
	I0331 10:20:42.274129    2904 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0331 10:20:42.274145    2904 start_flags.go:319] config:
	{Name:download-only-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:download-only-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:20:42.295462    2904 out.go:97] Starting control plane node download-only-557000 in cluster download-only-557000
	I0331 10:20:42.295512    2904 cache.go:120] Beginning downloading kic base image for docker with docker
	I0331 10:20:42.317383    2904 out.go:97] Pulling base image ...
	I0331 10:20:42.317485    2904 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 10:20:42.317556    2904 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local docker daemon
	I0331 10:20:42.375021    2904 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 to local cache
	I0331 10:20:42.375202    2904 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local cache directory
	I0331 10:20:42.375235    2904 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 in local cache directory, skipping pull
	I0331 10:20:42.375241    2904 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 exists in cache, skipping pull
	I0331 10:20:42.375250    2904 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 as a tarball
	I0331 10:20:42.401784    2904 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.0-rc.0/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0331 10:20:42.401824    2904 cache.go:57] Caching tarball of preloaded images
	I0331 10:20:42.402166    2904 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 10:20:42.424000    2904 out.go:97] Downloading Kubernetes v1.27.0-rc.0 preload ...
	I0331 10:20:42.424096    2904 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:42.637973    2904 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.0-rc.0/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:6096a776168534014d2f50b9988b2d60 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0331 10:20:57.394331    2904 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:57.394518    2904 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0331 10:20:57.986325    2904 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.0-rc.0 on docker
	I0331 10:20:57.986458    2904 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/download-only-557000/config.json ...
	I0331 10:20:57.986847    2904 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0331 10:20:57.987218    2904 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.0-rc.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.0-rc.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/16144-2324/.minikube/cache/darwin/amd64/v1.27.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-557000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.0-rc.0/LogsDuration (0.30s)
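
For reference, the exit status 85 above is what this subtest expects: a --download-only profile only populates the image and binary caches and never creates a control plane node, so "minikube logs" has nothing to collect (hence the 'The control plane node "" does not exist.' message in stdout). A minimal sketch of the sequence, using the same commands this run used:

	out/minikube-darwin-amd64 start --download-only -p download-only-557000 --driver=docker
	out/minikube-darwin-amd64 -p download-only-557000 logs    # exits 85: no control plane node exists yet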

TestDownloadOnly/DeleteAll (0.69s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.69s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-557000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestDownloadOnlyKic (2.14s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-535000 --alsologtostderr --driver=docker 
aaa_download_only_test.go:226: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-535000 --alsologtostderr --driver=docker : (1.038143102s)
helpers_test.go:175: Cleaning up "download-docker-535000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-535000
--- PASS: TestDownloadOnlyKic (2.14s)

TestBinaryMirror (1.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-040000 --alsologtostderr --binary-mirror http://127.0.0.1:49362 --driver=docker 
aaa_download_only_test.go:308: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-040000 --alsologtostderr --binary-mirror http://127.0.0.1:49362 --driver=docker : (1.096623124s)
helpers_test.go:175: Cleaning up "binary-mirror-040000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-040000
--- PASS: TestBinaryMirror (1.72s)

TestOffline (47.1s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-867000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-867000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (44.048771767s)
helpers_test.go:175: Cleaning up "offline-docker-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-867000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-867000: (3.04623306s)
--- PASS: TestOffline (47.10s)

TestAddons/Setup (137.81s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-841000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-841000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m17.812692389s)
--- PASS: TestAddons/Setup (137.81s)

TestAddons/parallel/MetricsServer (5.64s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: metrics-server stabilized in 2.449217ms
addons_test.go:384: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-6588d95b98-qmlvh" [e0a598f8-abe2-45e1-9b56-32240dde9848] Running
addons_test.go:384: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.022253105s
addons_test.go:390: (dbg) Run:  kubectl --context addons-841000 top pods -n kube-system
addons_test.go:407: (dbg) Run:  out/minikube-darwin-amd64 -p addons-841000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)

TestAddons/parallel/HelmTiller (12.89s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:431: tiller-deploy stabilized in 2.952716ms
addons_test.go:433: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-pjbqr" [bb400ae0-b0e3-40e7-8e4a-b7fa4262b283] Running
addons_test.go:433: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008515451s
addons_test.go:448: (dbg) Run:  kubectl --context addons-841000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:448: (dbg) Done: kubectl --context addons-841000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.355970598s)
addons_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 -p addons-841000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.89s)

TestAddons/parallel/CSI (50.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:536: csi-hostpath-driver pods stabilized in 4.857447ms
addons_test.go:539: (dbg) Run:  kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:549: (dbg) Run:  kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [60f28a40-5161-433e-bb09-cab3e4959cf2] Pending
helpers_test.go:344: "task-pv-pod" [60f28a40-5161-433e-bb09-cab3e4959cf2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [60f28a40-5161-433e-bb09-cab3e4959cf2] Running
addons_test.go:554: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.008292988s
addons_test.go:559: (dbg) Run:  kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-841000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-841000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-841000 delete pod task-pv-pod
addons_test.go:575: (dbg) Run:  kubectl --context addons-841000 delete pvc hpvc
addons_test.go:581: (dbg) Run:  kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-841000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:591: (dbg) Run:  kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:596: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fcf02285-6285-48d7-acbc-a33674df2be9] Pending
helpers_test.go:344: "task-pv-pod-restore" [fcf02285-6285-48d7-acbc-a33674df2be9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fcf02285-6285-48d7-acbc-a33674df2be9] Running
addons_test.go:596: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00976264s
addons_test.go:601: (dbg) Run:  kubectl --context addons-841000 delete pod task-pv-pod-restore
addons_test.go:605: (dbg) Run:  kubectl --context addons-841000 delete pvc hpvc-restore
addons_test.go:609: (dbg) Run:  kubectl --context addons-841000 delete volumesnapshot new-snapshot-demo
addons_test.go:613: (dbg) Run:  out/minikube-darwin-amd64 -p addons-841000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:613: (dbg) Done: out/minikube-darwin-amd64 -p addons-841000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.499650532s)
addons_test.go:617: (dbg) Run:  out/minikube-darwin-amd64 -p addons-841000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.63s)
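
The sequence above is the standard CSI snapshot/restore round trip: provision a PVC, mount it in a pod, snapshot it, delete the original pod and claim, then restore a fresh PVC from the snapshot and mount that in a second pod. Condensed to the commands the test actually ran (the manifest contents live in minikube's testdata directory and are not reproduced here):

	kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-841000 delete pod task-pv-pod
	kubectl --context addons-841000 delete pvc hpvc
	kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-841000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml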

TestAddons/parallel/Headlamp (17.32s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:799: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-841000 --alsologtostderr -v=1
addons_test.go:799: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-841000 --alsologtostderr -v=1: (2.283519645s)
addons_test.go:804: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58c48fc87f-66m4f" [b48c40f3-9cb7-4ad7-98a7-41f801220418] Pending
helpers_test.go:344: "headlamp-58c48fc87f-66m4f" [b48c40f3-9cb7-4ad7-98a7-41f801220418] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58c48fc87f-66m4f" [b48c40f3-9cb7-4ad7-98a7-41f801220418] Running
addons_test.go:804: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.031355582s
--- PASS: TestAddons/parallel/Headlamp (17.32s)

TestAddons/parallel/CloudSpanner (5.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:820: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5dd65ff88c-8st6x" [eb10c222-27a1-45b3-9620-275a65dc990a] Running
addons_test.go:820: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006506407s
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-841000
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:625: (dbg) Run:  kubectl --context addons-841000 create ns new-namespace
addons_test.go:639: (dbg) Run:  kubectl --context addons-841000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.52s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-841000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-841000: (10.973397436s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-841000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-841000
addons_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-841000
--- PASS: TestAddons/StoppedEnableDisable (11.52s)

TestCertOptions (28.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-821000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-821000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (24.704319062s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-821000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-821000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-821000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-821000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-821000: (2.655987596s)
--- PASS: TestCertOptions (28.24s)
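
To confirm by hand that the requested SANs and the non-default port 8555 landed in the serving certificate, the same openssl call can be filtered down to the relevant extension; the grep filter below is an illustrative addition, not part of the test:

	out/minikube-darwin-amd64 -p cert-options-821000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"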

TestCertExpiration (242.62s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-298000 --memory=2048 --cert-expiration=3m --driver=docker 
E0331 10:58:27.148588    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-298000 --memory=2048 --cert-expiration=3m --driver=docker : (27.812267646s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-298000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0331 11:01:29.851700    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:01:30.184708    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 11:01:40.091402    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:02:00.572476    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-298000 --memory=2048 --cert-expiration=8760h --driver=docker : (32.115887773s)
helpers_test.go:175: Cleaning up "cert-expiration-298000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-298000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-298000: (2.695362821s)
--- PASS: TestCertExpiration (242.62s)
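
The 242s wall time against roughly 60s of start/delete work is consistent with the test waiting out the 3-minute certificate lifetime between the two starts; the second start with a longer --cert-expiration then has to regenerate the expired certificates. In outline (commands as run above):

	out/minikube-darwin-amd64 start -p cert-expiration-298000 --memory=2048 --cert-expiration=3m --driver=docker
	# wait ~3m for the short-lived certs to expire, then restart with a one-year lifetime
	out/minikube-darwin-amd64 start -p cert-expiration-298000 --memory=2048 --cert-expiration=8760h --driver=docker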

TestDockerFlags (31.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-629000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-629000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (27.845565983s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-629000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-629000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-629000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-629000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-629000: (2.655571947s)
--- PASS: TestDockerFlags (31.43s)
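
The two systemctl probes are how the test verifies that --docker-env and --docker-opt actually reached the daemon: FOO=BAR and BAZ=BAT should appear under Environment, and the debug/icc options under ExecStart. The same checks can be run manually:

	out/minikube-darwin-amd64 -p docker-flags-629000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-darwin-amd64 -p docker-flags-629000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"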

TestForceSystemdFlag (27.03s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-882000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-882000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (23.285685177s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-882000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-882000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-882000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-882000: (3.284374147s)
--- PASS: TestForceSystemdFlag (27.03s)

TestForceSystemdEnv (30.89s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-444000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-444000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (27.505189789s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-444000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-444000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-444000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-444000: (2.851602151s)
--- PASS: TestForceSystemdEnv (30.89s)

TestHyperKitDriverInstallOrUpdate (6.73s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.73s)

TestErrorSpam/start (2.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 start --dry-run
--- PASS: TestErrorSpam/start (2.69s)

TestErrorSpam/status (1.29s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 status
--- PASS: TestErrorSpam/status (1.29s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.81s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

TestErrorSpam/stop (11.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 stop: (10.821717722s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-497000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-497000 stop
--- PASS: TestErrorSpam/stop (11.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/16144-2324/.minikube/files/etc/test/nested/copy/2800/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (89.31s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-amd64 start -p functional-281000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m29.312165771s)
--- PASS: TestFunctional/serial/StartWithProxy (89.31s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.53s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-amd64 start -p functional-281000 --alsologtostderr -v=8: (40.526598464s)
functional_test.go:658: soft start took 40.52704814s for "functional-281000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.53s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-281000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (7.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.1: (2.686666005s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:3.3: (2.522482282s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add registry.k8s.io/pause:latest: (2.185119431s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.39s)

TestFunctional/serial/CacheCmd/cache/add_local (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-281000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local2662449991/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache add minikube-local-cache-test:functional-281000
functional_test.go:1084: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache add minikube-local-cache-test:functional-281000: (1.187563338s)
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache delete minikube-local-cache-test:functional-281000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-281000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.66s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0331 10:28:27.296401    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:27.302223    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (394.992365ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cache reload
E0331 10:28:27.313058    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:27.333318    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:27.373936    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:27.454525    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:27.614591    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:27.934974    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:28.575147    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
functional_test.go:1153: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 cache reload: (1.538293349s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.78s)
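
The reload cycle is the point of this subtest: remove a cached image from inside the node, confirm crictl no longer finds it (the exit status 1 above), then let "cache reload" push it back in from the host-side cache. By hand:

	out/minikube-darwin-amd64 -p functional-281000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-darwin-amd64 -p functional-281000 cache reload
	out/minikube-darwin-amd64 -p functional-281000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again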

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.52s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 kubectl -- --context functional-281000 get pods
E0331 10:28:29.855566    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-281000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

TestFunctional/serial/ExtraConfig (46.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0331 10:28:32.415668    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:37.537876    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:28:47.778851    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 10:29:08.258667    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-darwin-amd64 start -p functional-281000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.456091499s)
functional_test.go:756: restart took 46.456285099s for "functional-281000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.46s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-281000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 logs
functional_test.go:1231: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 logs: (3.200967788s)
--- PASS: TestFunctional/serial/LogsCmd (3.20s)

TestFunctional/serial/LogsFileCmd (3.03s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd586650858/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd586650858/001/logs.txt: (3.027202207s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.03s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 config get cpus: exit status 14 (44.032943ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 config get cpus: exit status 14 (42.098687ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
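
Note the exit-code contract this subtest exercises: "config get" on an unset key exits 14 with "specified key could not be found in config" on stderr, while set/unset exit 0. The full cycle, as run above:

	out/minikube-darwin-amd64 -p functional-281000 config unset cpus
	out/minikube-darwin-amd64 -p functional-281000 config get cpus     # exit 14: key not set
	out/minikube-darwin-amd64 -p functional-281000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-281000 config get cpus     # now prints the stored value
	out/minikube-darwin-amd64 -p functional-281000 config unset cpus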

TestFunctional/parallel/DashboardCmd (9.72s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-281000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-281000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 5440: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.72s)

TestFunctional/parallel/DryRun (1.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (749.037887ms)
-- stdout --
	* [functional-281000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0331 10:30:54.233870    5354 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:30:54.234033    5354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:30:54.234038    5354 out.go:309] Setting ErrFile to fd 2...
	I0331 10:30:54.234042    5354 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:30:54.234163    5354 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 10:30:54.235428    5354 out.go:303] Setting JSON to false
	I0331 10:30:54.255566    5354 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1822,"bootTime":1680282032,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 10:30:54.255660    5354 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 10:30:54.262941    5354 out.go:177] * [functional-281000] minikube v1.29.0 on Darwin 13.3
	I0331 10:30:54.304662    5354 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 10:30:54.304674    5354 notify.go:220] Checking for updates...
	I0331 10:30:54.346525    5354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 10:30:54.367602    5354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 10:30:54.388417    5354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 10:30:54.409568    5354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 10:30:54.430523    5354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 10:30:54.451641    5354 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 10:30:54.451974    5354 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 10:30:54.515881    5354 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 10:30:54.516011    5354 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:30:54.703701    5354 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-31 17:30:54.569130704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:30:54.762285    5354 out.go:177] * Using the docker driver based on existing profile
	I0331 10:30:54.800067    5354 start.go:295] selected driver: docker
	I0331 10:30:54.800094    5354 start.go:859] validating driver "docker" against &{Name:functional-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-281000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:30:54.800190    5354 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 10:30:54.823948    5354 out.go:177] 
	W0331 10:30:54.861325    5354 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0331 10:30:54.883236    5354 out.go:177] 
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.59s)
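Both outcomes above are the point of the test: `--dry-run` validates flags against the existing profile without touching the cluster, so the 250MB request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while the second invocation, with no memory override, validates cleanly. A sketch of the pair against the same profile:

# Expected to fail with exit 23: 250MB is below minikube's 1800MB floor.
out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --driver=docker; echo "exit $?"
# Expected to validate cleanly against the existing profile.
out/minikube-darwin-amd64 start -p functional-281000 --dry-run --driver=docker; echo "exit $?"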

TestFunctional/parallel/InternationalLanguage (0.72s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-281000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (720.41831ms)
-- stdout --
	* [functional-281000] minikube v1.29.0 sur Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0331 10:30:55.823016    5397 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:30:55.823161    5397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:30:55.823166    5397 out.go:309] Setting ErrFile to fd 2...
	I0331 10:30:55.823170    5397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:30:55.823293    5397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 10:30:55.824770    5397 out.go:303] Setting JSON to false
	I0331 10:30:55.845309    5397 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1823,"bootTime":1680282032,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0331 10:30:55.845398    5397 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0331 10:30:55.869578    5397 out.go:177] * [functional-281000] minikube v1.29.0 sur Darwin 13.3
	I0331 10:30:55.911501    5397 notify.go:220] Checking for updates...
	I0331 10:30:55.911509    5397 out.go:177]   - MINIKUBE_LOCATION=16144
	I0331 10:30:55.969286    5397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	I0331 10:30:55.990785    5397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0331 10:30:56.012571    5397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0331 10:30:56.033387    5397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	I0331 10:30:56.054701    5397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0331 10:30:56.076231    5397 config.go:182] Loaded profile config "functional-281000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 10:30:56.076868    5397 driver.go:365] Setting default libvirt URI to qemu:///system
	I0331 10:30:56.143357    5397 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0331 10:30:56.143476    5397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0331 10:30:56.333723    5397 info.go:266] docker info: {ID:7LJT:2NJA:NXZQ:FWAT:KIW7:M2WK:LGEH:GQAG:65D4:V5IZ:QKDO:7KKX Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-31 17:30:56.197918813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231724032 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0331 10:30:56.376264    5397 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0331 10:30:56.397281    5397 start.go:295] selected driver: docker
	I0331 10:30:56.397306    5397 start.go:859] validating driver "docker" against &{Name:functional-281000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.38@sha256:516db0892e1cd79b6781fc1a102fca4bf392576bbf3ca0fa01a467cb6cc0af55 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-281000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0331 10:30:56.397450    5397 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0331 10:30:56.422247    5397 out.go:177] 
	W0331 10:30:56.444370    5397 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0331 10:30:56.465337    5397 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.72s)
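The French output is the assertion here: the same insufficient-memory dry run is repeated under a French locale and must produce the translated RSRC_INSUFFICIENT_REQ_MEMORY message. A sketch of reproducing it by hand, assuming minikube picks its message language from the standard locale environment variables:

# Hedged sketch: force a French locale so minikube emits its translated
# messages; the exit code (23) is the same as in the English run.
LC_ALL=fr_FR.UTF-8 out/minikube-darwin-amd64 start -p functional-281000 \
  --dry-run --memory 250MB --driver=docker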

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
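The three invocations cover the default table output, a Go-template format string, and JSON. (The literal label "kublet" in the test's format string is a typo in the test itself, preserved verbatim above; the template field is `.Kubelet`.) A sketch of the template and JSON forms:

out/minikube-darwin-amd64 -p functional-281000 status \
  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-darwin-amd64 -p functional-281000 status -o json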

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (27.58s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0a4abe94-1f45-4f7a-acb5-429b1c7e40ac] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012198883s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-281000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-281000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [be7091ec-ffff-432c-8135-2e6433c7b83a] Pending
helpers_test.go:344: "sp-pod" [be7091ec-ffff-432c-8135-2e6433c7b83a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [be7091ec-ffff-432c-8135-2e6433c7b83a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.008321819s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-281000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-281000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [617f1ccd-dd57-40dc-ba33-22872a8961ac] Pending
helpers_test.go:344: "sp-pod" [617f1ccd-dd57-40dc-ba33-22872a8961ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [617f1ccd-dd57-40dc-ba33-22872a8961ac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00952442s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-281000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.58s)
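The test round-trips a file through the claim: it writes /tmp/mount/foo from the first sp-pod, deletes the pod, recreates it against the same PVC, and lists the mount to prove the data survived. The testdata manifests are not reproduced in this log, so the following is a hypothetical minimal equivalent of the claim; only the name "myclaim" is taken from the log, and the size and access mode are assumptions:

# Hedged sketch: apply a minimal PVC like the one the test uses.
kubectl --context functional-281000 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-281000 get pvc myclaim -o json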

TestFunctional/parallel/SSHCmd (0.82s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

TestFunctional/parallel/CpCmd (2.05s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh -n functional-281000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 cp functional-281000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd3618115105/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh -n functional-281000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.05s)

TestFunctional/parallel/MySQL (26.68s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-281000 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-pb28m" [9ab0f0d7-bf09-4931-a7b1-bb09e4ab07d4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-pb28m" [9ab0f0d7-bf09-4931-a7b1-bb09e4ab07d4] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.01172841s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- mysql -ppassword -e "show databases;": exit status 1 (169.641719ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- mysql -ppassword -e "show databases;": exit status 1 (117.582212ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- mysql -ppassword -e "show databases;": exit status 1 (124.851588ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.68s)
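The three non-zero exits above are benign startup noise: error 1045 and then 2002 are what mysqld returns while it is still initializing inside the pod, and the test simply re-runs the query until it succeeds. An equivalent ad-hoc retry loop, using the pod name from this run:

# Hedged sketch: poll until mysqld inside the pod accepts the query.
until kubectl --context functional-281000 exec mysql-888f84dd9-pb28m -- \
    mysql -ppassword -e "show databases;"; do
  sleep 2
done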

TestFunctional/parallel/FileSync (0.51s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/2800/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/test/nested/copy/2800/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.51s)

TestFunctional/parallel/CertSync (2.65s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/2800.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/2800.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/2800.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /usr/share/ca-certificates/2800.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/28002.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/28002.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/28002.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /usr/share/ca-certificates/28002.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.65s)
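The `.0` paths checked above are OpenSSL subject-hash names for the synced certificates, presumably 51391683.0 for 2800.pem and 3ec20f2e.0 for 28002.pem. The hash can be recomputed on the host to confirm the mapping; the certificate path below is a placeholder:

# Hedged sketch: prints the 8-hex-digit subject hash that names the
# /etc/ssl/certs/<hash>.0 link inside the VM.
openssl x509 -noout -subject_hash -in /path/to/2800.pem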

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-281000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "sudo systemctl is-active crio": exit status 1 (577.849625ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
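The non-zero exit is the assertion here, not a failure: `systemctl is-active` exits 3 for an inactive unit (surfaced above as "ssh: Process exited with status 3" and wrapped as exit status 1), and crio is expected to be inactive on a Docker-runtime cluster. The same check by hand:

# Expected to print "inactive" and exit non-zero on this cluster.
out/minikube-darwin-amd64 -p functional-281000 ssh "sudo systemctl is-active crio" \
  || echo "crio not active, as expected"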

TestFunctional/parallel/License (0.76s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.76s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 version -o=json --components: (1.052541876s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format short:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-281000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-281000
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-281000 | a3bd23ac25959 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.26.3           | 5a79047369329 | 56.4MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-apiserver              | v1.26.3           | 1d9b3cbae03ce | 134MB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| gcr.io/google-containers/addon-resizer      | functional-281000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/mysql                     | 5.7               | 8aea3fb7309a3 | 455MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.3           | ce8c2293ef09c | 123MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | 8e75cbc5b25c8 | 41MB   |
| docker.io/library/nginx                     | latest            | 080ed0ed8312d | 142MB  |
| registry.k8s.io/kube-proxy                  | v1.26.3           | 92ed2bec97a63 | 65.6MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format json:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.3"],"size":"123000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.3"],"size":"56400000"},{"id":"92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.3"],"size":"65599999"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"a3bd23ac25959135737dd69209eef12212354315736dabc4bef88906629b8720","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-281000"],"size":"30"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.3"],"size":"134000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-281000"],"size":"32900000"},{"id":"8e75cbc5b25c8438fcfe2e7c12c98409d5f161cbb668d6c444e02796691ada70","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"8aea3fb7309a304def7ce3018a44b4f732de4decea4fba7e7520ff703bc5135c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image ls --format yaml:
- id: a3bd23ac25959135737dd69209eef12212354315736dabc4bef88906629b8720
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-281000
size: "30"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 8aea3fb7309a304def7ce3018a44b4f732de4decea4fba7e7520ff703bc5135c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.3
size: "123000000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 8e75cbc5b25c8438fcfe2e7c12c98409d5f161cbb668d6c444e02796691ada70
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: 1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.3
size: "134000000"
- id: 5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.3
size: "56400000"
- id: 92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.3
size: "65599999"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-281000
size: "32900000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.37s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh pgrep buildkitd: exit status 1 (397.502469ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image build -t localhost/my-image:functional-281000 testdata/build
2023/03/31 10:31:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image build -t localhost/my-image:functional-281000 testdata/build: (3.184767305s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-281000 image build -t localhost/my-image:functional-281000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 6fe3e06525d8
Removing intermediate container 6fe3e06525d8
---> a526c4d7b8d3
Step 3/3 : ADD content.txt /
---> c8db28d71c2c
Successfully built c8db28d71c2c
Successfully tagged localhost/my-image:functional-281000
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-281000 image build -t localhost/my-image:functional-281000 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
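The three logged build steps pin down the test's Dockerfile even though testdata/build itself is not shown. A hypothetical reconstruction that reproduces the same build; the content.txt payload is an assumption:

mkdir -p testdata/build
echo "example" > testdata/build/content.txt   # payload contents are an assumption
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-darwin-amd64 -p functional-281000 image build \
  -t localhost/my-image:functional-281000 testdata/build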

TestFunctional/parallel/ImageCommands/Setup (2.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.482037109s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-281000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.55s)

TestFunctional/parallel/DockerEnv/bash (1.86s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-281000 docker-env) && out/minikube-darwin-amd64 status -p functional-281000"
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-281000 docker-env) && out/minikube-darwin-amd64 status -p functional-281000": (1.211186685s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-281000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.86s)
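`docker-env` prints export statements (DOCKER_HOST, DOCKER_CERT_PATH, and friends) for the Docker daemon inside the cluster node; eval-ing them makes the host's docker CLI talk to that daemon, which is why the subsequent `docker images` lists the cluster's images. The same pattern outside the test:

# Point this shell's docker CLI at the functional-281000 node's daemon.
eval "$(out/minikube-darwin-amd64 -p functional-281000 docker-env)"
docker images   # now lists images from inside the cluster node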

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.42s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.37s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:353: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000: (3.191462009s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000: (2.09848515s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.49s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.473680144s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:243: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load --daemon gcr.io/google-containers/addon-resizer:functional-281000: (3.940103623s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.89s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image save gcr.io/google-containers/addon-resizer:functional-281000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image save gcr.io/google-containers/addon-resizer:functional-281000 /Users/jenkins/workspace/addon-resizer-save.tar: (2.088610838s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.09s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image rm gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:407: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.576963191s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.89s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 image save --daemon gcr.io/google-containers/addon-resizer:functional-281000
functional_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p functional-281000 image save --daemon gcr.io/google-containers/addon-resizer:functional-281000: (2.4178532s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-281000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.54s)

TestFunctional/parallel/ServiceCmd/DeployApp (20.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-281000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-281000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7f895565f7-dtpbm" [ca641788-e155-42f9-83fd-27ef2a9c23bc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0331 10:29:49.218424    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-7f895565f7-dtpbm" [ca641788-e155-42f9-83fd-27ef2a9c23bc] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.00798002s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.13s)
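The Pending -> Running transition above comes from a polling wait. A minimal Go sketch of the same idea (a hypothetical standalone helper, not helpers_test.go itself, simplified to the pod phase rather than full readiness):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Minute) // same budget as the wait above
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-281000",
			"get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[0].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("app=hello-node is running")
			return
		}
		time.Sleep(2 * time.Second) // still Pending / ContainersNotReady
	}
	fmt.Println("timed out waiting for app=hello-node")
}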

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 5049: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-281000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c974ac29-c0f4-477e-92f0-ec7e9a5742f0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c974ac29-c0f4-477e-92f0-ec7e9a5742f0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.0081597s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.20s)

TestFunctional/parallel/ServiceCmd/List (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service list -o json
functional_test.go:1492: Took "629.547887ms" to run "out/minikube-darwin-amd64 -p functional-281000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service --namespace=default --https --url hello-node
functional_test.go:1507: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 service --namespace=default --https --url hello-node: signal: killed (15.001471713s)
-- stdout --
	https://127.0.0.1:50138
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1520: found endpoint: https://127.0.0.1:50138
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)
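On the Docker driver for darwin, `minikube service --url` stays in the foreground to keep its tunnel open, so the harness reads the printed URL and then kills the process; the `signal: killed (15.0s)` above is that timeout firing, not a failure. A minimal Go sketch of the pattern (illustrative, using the binary and profile from this run):

package main

import (
	"bytes"
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	var out bytes.Buffer
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "functional-281000",
		"service", "--namespace=default", "--https", "--url", "hello-node")
	cmd.Stdout = &out
	_ = cmd.Run() // exits via SIGKILL when the timeout fires, hence "signal: killed"

	fmt.Print(out.String()) // e.g. https://127.0.0.1:50138
}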

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-281000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-281000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 5078: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service hello-node --url --format={{.IP}}
functional_test.go:1538: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 service hello-node --url --format={{.IP}}: signal: killed (15.001336938s)
-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 service hello-node --url
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 service hello-node --url: signal: killed (15.001749618s)
-- stdout --
	http://127.0.0.1:50184
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1563: found endpoint for hello-node: http://127.0.0.1:50184
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1313: Took "420.691467ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1327: Took "63.299046ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1364: Took "421.212622ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1377: Took "64.933284ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/MountCmd/any-port (10.11s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2216639111/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1680283848666445000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2216639111/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1680283848666445000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2216639111/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1680283848666445000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2216639111/001/test-1680283848666445000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.247756ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 31 17:30 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 31 17:30 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 31 17:30 test-1680283848666445000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh cat /mount-9p/test-1680283848666445000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-281000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4b332008-1bca-4f5b-95b0-9ce428e5df8a] Pending
helpers_test.go:344: "busybox-mount" [4b332008-1bca-4f5b-95b0-9ce428e5df8a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4b332008-1bca-4f5b-95b0-9ce428e5df8a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4b332008-1bca-4f5b-95b0-9ce428e5df8a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007420904s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-281000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2216639111/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.11s)
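The first `findmnt -T /mount-9p` above exits with status 1 because the 9p mount is not visible yet, so the probe is simply re-run until it succeeds. A minimal sketch of that retry loop (hypothetical helper; the attempt count and delay are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(profile, path string, attempts int) error {
	for i := 0; i < attempts; i++ {
		probe := fmt.Sprintf("findmnt -T %s | grep 9p", path)
		if exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh", probe).Run() == nil {
			return nil // the 9p mount is now visible inside the guest
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never appeared as a 9p mount", path)
}

func main() {
	if err := waitForMount("functional-281000", "/mount-9p", 10); err != nil {
		fmt.Println(err)
	}
}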

TestFunctional/parallel/MountCmd/specific-port (2.73s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3672881141/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.383742ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3672881141/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-281000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-281000 ssh "sudo umount -f /mount-9p": exit status 1 (512.926126ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-281000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-281000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3672881141/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.73s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-281000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-281000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-281000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.35s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-194000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-194000: (2.352981466s)
--- PASS: TestImageBuild/serial/NormalBuild (2.35s)

TestImageBuild/serial/BuildWithBuildArg (0.96s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-194000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.96s)

TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-194000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-194000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

TestJSONOutput/start/Command (41.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-446000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-446000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (41.674875732s)
--- PASS: TestJSONOutput/start/Command (41.68s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-446000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-446000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-446000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-446000 --output=json --user=testUser: (10.874184858s)
--- PASS: TestJSONOutput/stop/Command (10.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.75s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-482000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-482000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (347.332916ms)
-- stdout --
	{"specversion":"1.0","id":"4c4f65eb-2697-466e-ba49-0ff1b089e311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-482000] minikube v1.29.0 on Darwin 13.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a45b8447-c353-48cb-8ab4-44f796b5661d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16144"}}
	{"specversion":"1.0","id":"b4aa85ca-a08b-446b-80f6-7a68024fb8bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig"}}
	{"specversion":"1.0","id":"c2493143-4f33-47fc-b0cc-50071b3ad406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"46ff62e2-d02a-41d5-89a6-1cdd00db50da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f408c9d2-31ee-48fe-8d58-f84aea7c0215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube"}}
	{"specversion":"1.0","id":"c4400774-59fc-4e3a-9915-169b58308f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c9767c72-9cbb-4c06-b13b-57a904ba9928","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-482000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-482000
--- PASS: TestErrorJSONOutput (0.75s)
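Each stdout line above is a self-contained CloudEvents-style JSON object, which is what makes `--output=json` machine-readable. A minimal Go sketch (illustrative; the struct name is an assumption, field names match the events shown above) that decodes such a stream line by line and surfaces error events like DRV_UNSUPPORTED_OS:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents-style objects printed above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from `minikube start --output=json`
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip any non-JSON line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			// e.g. name=DRV_UNSUPPORTED_OS, message=The driver 'fail' is not supported ...
			fmt.Printf("error %s: %s (exit code %s)\n",
				e.Data["name"], e.Data["message"], e.Data["exitcode"])
		}
	}
}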

TestKicCustomNetwork/create_custom_network (26.53s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-445000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-445000 --network=: (23.865649485s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-445000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-445000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-445000: (2.60001327s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.53s)

TestKicCustomNetwork/use_default_bridge_network (26.58s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-369000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-369000 --network=bridge: (24.017447117s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-369000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-369000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-369000: (2.503105677s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.58s)

TestKicExistingNetwork (27.15s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-184000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-184000 --network=existing-network: (24.288610382s)
helpers_test.go:175: Cleaning up "existing-network-184000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-184000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-184000: (2.476299545s)
--- PASS: TestKicExistingNetwork (27.15s)

TestKicCustomSubnet (26.44s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-472000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-472000 --subnet=192.168.60.0/24: (23.959029734s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-472000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-472000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-472000: (2.424199024s)
--- PASS: TestKicCustomSubnet (26.44s)
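The `docker network inspect` step above is the actual assertion: the cluster network's IPAM config is read back and compared with the requested `--subnet`. A minimal Go sketch of the same check (illustrative only; the network exists only until the profile is deleted):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same inspect format string as the test run above.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-472000",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Println("unexpected subnet:", got)
	} else {
		fmt.Println("subnet matches:", got)
	}
}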

TestKicStaticIP (27.19s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-935000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-935000 --static-ip=192.168.200.200: (24.319133425s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-935000 ip
helpers_test.go:175: Cleaning up "static-ip-935000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-935000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-935000: (2.63214315s)
--- PASS: TestKicStaticIP (27.19s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (56.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-259000 --driver=docker 
E0331 10:43:27.174221    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-259000 --driver=docker : (24.801641738s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-261000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-261000 --driver=docker : (24.438540208s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-259000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-261000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-261000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-261000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-261000: (2.634126381s)
helpers_test.go:175: Cleaning up "first-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-259000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-259000: (2.613141431s)
--- PASS: TestMinikubeProfile (56.31s)

TestMountStart/serial/StartWithMountFirst (8.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-755000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-755000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.404901985s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.41s)

TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-755000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

TestMountStart/serial/StartWithMountSecond (8.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-769000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-769000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.729251507s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-769000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (2.22s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-755000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-755000 --alsologtostderr -v=5: (2.220250489s)
--- PASS: TestMountStart/serial/DeleteFirst (2.22s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-769000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.58s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-769000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-769000: (1.579744332s)
--- PASS: TestMountStart/serial/Stop (1.58s)

TestMountStart/serial/RestartStopped (6.32s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-769000
E0331 10:44:30.583819    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-769000: (5.320240811s)
--- PASS: TestMountStart/serial/RestartStopped (6.32s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-769000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (73.09s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-663000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0331 10:44:50.215217    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-663000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m12.238543949s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.09s)

TestMultiNode/serial/DeployApp2Nodes (44.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-663000 -- rollout status deployment/busybox: (3.486012598s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-7kc6t -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-z8rnh -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-7kc6t -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-z8rnh -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-7kc6t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-z8rnh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (44.39s)
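
Note: the retry loop above polls the jsonpath query {.items[*].status.podIP} until the busybox deployment reports one pod IP per node; a single IP (the "may be temporary" lines) just means the second replica has not been assigned an address yet. A minimal standalone sketch of the same wait, assuming kubectl is already pointed at the cluster:

# Poll until two pod IPs are reported (one busybox replica per node).
while [ "$(kubectl get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -lt 2 ]; do
  sleep 2
done
kubectl get pods -o jsonpath='{.items[*].status.podIP}'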

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-7kc6t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-7kc6t -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-z8rnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-663000 -- exec busybox-6b86dd6d48-z8rnh -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
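
Note: the pipeline above recovers the host IP inside each pod: busybox's nslookup prints the resolved address of host.minikube.internal on line 5 of its output, awk 'NR==5' selects that line, and cut -d' ' -f3 takes the address field, which is then pinged once. A hedged re-run of the same probe (pod name taken from this run; the line number is specific to busybox's nslookup output format):

# Resolve the host gateway name inside a pod, then ping the address once.
HOST_IP=$(kubectl exec busybox-6b86dd6d48-7kc6t -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl exec busybox-6b86dd6d48-7kc6t -- sh -c "ping -c 1 $HOST_IP"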

TestMultiNode/serial/AddNode (19.66s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-663000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-663000 -v 3 --alsologtostderr: (18.484006617s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr
multinode_test.go:116: (dbg) Done: out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr: (1.176770657s)
--- PASS: TestMultiNode/serial/AddNode (19.66s)

TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

TestMultiNode/serial/CopyFile (14.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Done: out/minikube-darwin-amd64 -p multinode-663000 status --output json --alsologtostderr: (1.015569416s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp testdata/cp-test.txt multinode-663000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile328592954/001/cp-test_multinode-663000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000:/home/docker/cp-test.txt multinode-663000-m02:/home/docker/cp-test_multinode-663000_multinode-663000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m02 "sudo cat /home/docker/cp-test_multinode-663000_multinode-663000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000:/home/docker/cp-test.txt multinode-663000-m03:/home/docker/cp-test_multinode-663000_multinode-663000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m03 "sudo cat /home/docker/cp-test_multinode-663000_multinode-663000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp testdata/cp-test.txt multinode-663000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile328592954/001/cp-test_multinode-663000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000-m02:/home/docker/cp-test.txt multinode-663000:/home/docker/cp-test_multinode-663000-m02_multinode-663000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000 "sudo cat /home/docker/cp-test_multinode-663000-m02_multinode-663000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000-m02:/home/docker/cp-test.txt multinode-663000-m03:/home/docker/cp-test_multinode-663000-m02_multinode-663000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m03 "sudo cat /home/docker/cp-test_multinode-663000-m02_multinode-663000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp testdata/cp-test.txt multinode-663000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile328592954/001/cp-test_multinode-663000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000-m03:/home/docker/cp-test.txt multinode-663000:/home/docker/cp-test_multinode-663000-m03_multinode-663000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000 "sudo cat /home/docker/cp-test_multinode-663000-m03_multinode-663000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 cp multinode-663000-m03:/home/docker/cp-test.txt multinode-663000-m02:/home/docker/cp-test_multinode-663000-m03_multinode-663000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 ssh -n multinode-663000-m02 "sudo cat /home/docker/cp-test_multinode-663000-m03_multinode-663000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.72s)
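
Note: the copy matrix above exercises all three argument forms of minikube cp: local to node, node to local, and node to node, where a node-side path is written NODE:PATH and the bare profile name addresses the primary node. A condensed sketch using this run's profile:

# local -> node
minikube -p multinode-663000 cp testdata/cp-test.txt multinode-663000:/home/docker/cp-test.txt
# node -> local
minikube -p multinode-663000 cp multinode-663000:/home/docker/cp-test.txt ./cp-test.txt
# node -> node (primary to worker m02)
minikube -p multinode-663000 cp multinode-663000:/home/docker/cp-test.txt multinode-663000-m02:/home/docker/cp-test.txt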

TestMultiNode/serial/StopNode (3.09s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-663000 node stop m03: (1.523310041s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-663000 status: exit status 7 (806.486383ms)
-- stdout --
	multinode-663000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-663000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-663000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr: exit status 7 (764.642606ms)
-- stdout --
	multinode-663000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-663000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-663000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0331 10:47:13.318090    9312 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:47:13.318279    9312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:47:13.318286    9312 out.go:309] Setting ErrFile to fd 2...
	I0331 10:47:13.318290    9312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:47:13.318408    9312 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 10:47:13.318608    9312 out.go:303] Setting JSON to false
	I0331 10:47:13.318631    9312 mustload.go:65] Loading cluster: multinode-663000
	I0331 10:47:13.318679    9312 notify.go:220] Checking for updates...
	I0331 10:47:13.318948    9312 config.go:182] Loaded profile config "multinode-663000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 10:47:13.318961    9312 status.go:255] checking status of multinode-663000 ...
	I0331 10:47:13.319376    9312 cli_runner.go:164] Run: docker container inspect multinode-663000 --format={{.State.Status}}
	I0331 10:47:13.380598    9312 status.go:330] multinode-663000 host status = "Running" (err=<nil>)
	I0331 10:47:13.380623    9312 host.go:66] Checking if "multinode-663000" exists ...
	I0331 10:47:13.380875    9312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-663000
	I0331 10:47:13.441956    9312 host.go:66] Checking if "multinode-663000" exists ...
	I0331 10:47:13.442219    9312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 10:47:13.442281    9312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-663000
	I0331 10:47:13.502028    9312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50687 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/multinode-663000/id_rsa Username:docker}
	I0331 10:47:13.594646    9312 ssh_runner.go:195] Run: systemctl --version
	I0331 10:47:13.599071    9312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 10:47:13.608491    9312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-663000
	I0331 10:47:13.669159    9312 kubeconfig.go:92] found "multinode-663000" server: "https://127.0.0.1:50686"
	I0331 10:47:13.669182    9312 api_server.go:165] Checking apiserver status ...
	I0331 10:47:13.669222    9312 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0331 10:47:13.679398    9312 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1965/cgroup
	W0331 10:47:13.687634    9312 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1965/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0331 10:47:13.687694    9312 ssh_runner.go:195] Run: ls
	I0331 10:47:13.691676    9312 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:50686/healthz ...
	I0331 10:47:13.696792    9312 api_server.go:278] https://127.0.0.1:50686/healthz returned 200:
	ok
	I0331 10:47:13.696805    9312 status.go:421] multinode-663000 apiserver status = Running (err=<nil>)
	I0331 10:47:13.696815    9312 status.go:257] multinode-663000 status: &{Name:multinode-663000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0331 10:47:13.696830    9312 status.go:255] checking status of multinode-663000-m02 ...
	I0331 10:47:13.697063    9312 cli_runner.go:164] Run: docker container inspect multinode-663000-m02 --format={{.State.Status}}
	I0331 10:47:13.757657    9312 status.go:330] multinode-663000-m02 host status = "Running" (err=<nil>)
	I0331 10:47:13.757678    9312 host.go:66] Checking if "multinode-663000-m02" exists ...
	I0331 10:47:13.757955    9312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-663000-m02
	I0331 10:47:13.818782    9312 host.go:66] Checking if "multinode-663000-m02" exists ...
	I0331 10:47:13.819030    9312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0331 10:47:13.819080    9312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-663000-m02
	I0331 10:47:13.879296    9312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50725 SSHKeyPath:/Users/jenkins/minikube-integration/16144-2324/.minikube/machines/multinode-663000-m02/id_rsa Username:docker}
	I0331 10:47:13.969427    9312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0331 10:47:13.979005    9312 status.go:257] multinode-663000-m02 status: &{Name:multinode-663000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0331 10:47:13.979029    9312 status.go:255] checking status of multinode-663000-m03 ...
	I0331 10:47:13.979298    9312 cli_runner.go:164] Run: docker container inspect multinode-663000-m03 --format={{.State.Status}}
	I0331 10:47:14.039533    9312 status.go:330] multinode-663000-m03 host status = "Stopped" (err=<nil>)
	I0331 10:47:14.039561    9312 status.go:343] host is not running, skipping remaining checks
	I0331 10:47:14.039572    9312 status.go:257] multinode-663000-m03 status: &{Name:multinode-663000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.09s)
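
Note: as the two status calls above show, minikube status exits 7 once any node is stopped, even though the control plane is still healthy, so scripts should branch on the exit code rather than grep the text. A minimal sketch, with the exit-code meaning as observed in this run:

out/minikube-darwin-amd64 -p multinode-663000 status
rc=$?
# rc=0: every node running; rc=7 (seen above): at least one node stopped.
[ "$rc" -ne 0 ] && echo "cluster not fully running (exit $rc)"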

TestMultiNode/serial/StartAfterStop (10.64s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-663000 node start m03 --alsologtostderr: (9.516131092s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status
multinode_test.go:261: (dbg) Done: out/minikube-darwin-amd64 -p multinode-663000 status: (1.000109644s)
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.64s)

TestMultiNode/serial/RestartKeepsNodes (87.22s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-663000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-663000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-663000: (23.231686331s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-663000 --wait=true -v=8 --alsologtostderr
E0331 10:48:27.160963    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-663000 --wait=true -v=8 --alsologtostderr: (1m3.896711423s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-663000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.22s)
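
Note: the assertion here is that a full stop/start cycle preserves the node list. A sketch of the same invariant check, diffing the list before and after:

out/minikube-darwin-amd64 node list -p multinode-663000 > /tmp/nodes-before.txt
out/minikube-darwin-amd64 stop -p multinode-663000
out/minikube-darwin-amd64 start -p multinode-663000 --wait=true
out/minikube-darwin-amd64 node list -p multinode-663000 > /tmp/nodes-after.txt
diff /tmp/nodes-before.txt /tmp/nodes-after.txt && echo "node list preserved across restart"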

TestMultiNode/serial/DeleteNode (6.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-663000 node delete m03: (5.32742929s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.27s)
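
Note: the go-template in the final check walks every node's conditions and prints the status of the Ready condition, one line per node, so after deleting m03 a healthy cluster prints exactly two True lines. The same query on its own, without the extra quoting the test harness adds:

# One Ready-condition status per node; expect only "True" lines after the delete.
kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'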

TestMultiNode/serial/StopMultiNode (22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 stop
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-663000 stop: (21.678609919s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-663000 status: exit status 7 (164.021116ms)
-- stdout --
	multinode-663000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-663000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr: exit status 7 (159.689538ms)
-- stdout --
	multinode-663000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-663000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0331 10:49:20.043154    9839 out.go:296] Setting OutFile to fd 1 ...
	I0331 10:49:20.043322    9839 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:49:20.043328    9839 out.go:309] Setting ErrFile to fd 2...
	I0331 10:49:20.043332    9839 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0331 10:49:20.043444    9839 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16144-2324/.minikube/bin
	I0331 10:49:20.043634    9839 out.go:303] Setting JSON to false
	I0331 10:49:20.043673    9839 mustload.go:65] Loading cluster: multinode-663000
	I0331 10:49:20.043723    9839 notify.go:220] Checking for updates...
	I0331 10:49:20.043981    9839 config.go:182] Loaded profile config "multinode-663000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0331 10:49:20.043994    9839 status.go:255] checking status of multinode-663000 ...
	I0331 10:49:20.044369    9839 cli_runner.go:164] Run: docker container inspect multinode-663000 --format={{.State.Status}}
	I0331 10:49:20.102010    9839 status.go:330] multinode-663000 host status = "Stopped" (err=<nil>)
	I0331 10:49:20.102027    9839 status.go:343] host is not running, skipping remaining checks
	I0331 10:49:20.102032    9839 status.go:257] multinode-663000 status: &{Name:multinode-663000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0331 10:49:20.102055    9839 status.go:255] checking status of multinode-663000-m02 ...
	I0331 10:49:20.102320    9839 cli_runner.go:164] Run: docker container inspect multinode-663000-m02 --format={{.State.Status}}
	I0331 10:49:20.160969    9839 status.go:330] multinode-663000-m02 host status = "Stopped" (err=<nil>)
	I0331 10:49:20.161001    9839 status.go:343] host is not running, skipping remaining checks
	I0331 10:49:20.161010    9839 status.go:257] multinode-663000-m02 status: &{Name:multinode-663000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.00s)

TestMultiNode/serial/RestartMultiNode (71.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-663000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0331 10:49:30.569062    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-663000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m10.905185969s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-663000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (71.81s)

TestMultiNode/serial/ValidateNameConflict (29.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-663000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-663000-m02 --driver=docker 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-663000-m02 --driver=docker : exit status 14 (388.540441ms)
-- stdout --
	* [multinode-663000-m02] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-663000-m02' is duplicated with machine name 'multinode-663000-m02' in profile 'multinode-663000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-663000-m03 --driver=docker 
E0331 10:50:53.618711    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-663000-m03 --driver=docker : (25.475595998s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-663000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-663000: exit status 80 (548.512298ms)
-- stdout --
	* Adding node m03 to cluster multinode-663000
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-663000-m03 already exists in multinode-663000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-663000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-663000-m03: (2.548962881s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.01s)

TestPreload (163.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-465000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-465000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m20.58594082s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-465000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-465000 -- docker pull gcr.io/k8s-minikube/busybox: (2.625951507s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-465000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-465000: (10.853887164s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-465000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
E0331 10:53:27.148177    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-465000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m6.668785138s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-465000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-465000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-465000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-465000: (2.72748144s)
--- PASS: TestPreload (163.88s)
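
Note: the flow above checks that an image pulled into a cluster started with --preload=false survives a stop and a restart that goes through the default (preloaded) start path. A condensed sketch of the same sequence:

out/minikube-darwin-amd64 start -p test-preload-465000 --preload=false --kubernetes-version=v1.24.4 --driver=docker
out/minikube-darwin-amd64 ssh -p test-preload-465000 -- docker pull gcr.io/k8s-minikube/busybox
out/minikube-darwin-amd64 stop -p test-preload-465000
out/minikube-darwin-amd64 start -p test-preload-465000 --driver=docker
# The pulled image should still be listed after the restart.
out/minikube-darwin-amd64 ssh -p test-preload-465000 -- docker images | grep busybox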

TestScheduledStopUnix (99.06s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-113000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-113000 --memory=2048 --driver=docker : (24.852188507s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-113000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-113000 -n scheduled-stop-113000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-113000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-113000 --cancel-scheduled
E0331 10:54:30.571345    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-113000 -n scheduled-stop-113000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-113000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-113000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-113000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-113000: exit status 7 (109.461216ms)
-- stdout --
	scheduled-stop-113000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-113000 -n scheduled-stop-113000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-113000 -n scheduled-stop-113000: exit status 7 (102.435918ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-113000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-113000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-113000: (2.342912186s)
--- PASS: TestScheduledStopUnix (99.06s)
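
Note: the scheduled-stop workflow exercised above is: arm a timer, inspect it through the TimeToStop status field, cancel it, then arm a short timer and let it fire, after which status exits 7. A sketch of the same sequence:

out/minikube-darwin-amd64 stop -p scheduled-stop-113000 --schedule 5m           # arm a stop 5 minutes out
out/minikube-darwin-amd64 status -p scheduled-stop-113000 --format='{{.TimeToStop}}'
out/minikube-darwin-amd64 stop -p scheduled-stop-113000 --cancel-scheduled      # disarm it
out/minikube-darwin-amd64 stop -p scheduled-stop-113000 --schedule 15s          # arm a short timer
sleep 20
out/minikube-darwin-amd64 status -p scheduled-stop-113000                       # exit status 7 once stopped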

TestSkaffold (63.67s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe333908233 version
skaffold_test.go:63: skaffold version: v2.3.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-790000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-790000 --memory=2600 --driver=docker : (24.711639988s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe333908233 run --minikube-profile skaffold-790000 --kube-context skaffold-790000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe333908233 run --minikube-profile skaffold-790000 --kube-context skaffold-790000 --status-check=true --port-forward=false --interactive=false: (19.441464715s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-76476c95ff-24cdt" [ad56ea8b-2c66-4e91-8044-16617470cac5] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011642248s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5b8bccdd6c-c4sn2" [232baed9-b22f-4462-8696-5c47baa458c1] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00682298s
helpers_test.go:175: Cleaning up "skaffold-790000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-790000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-790000: (2.881481352s)
--- PASS: TestSkaffold (63.67s)
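
Note: the skaffold invocation above deploys against an existing minikube profile; --status-check=true makes skaffold run block until the deployed pods are healthy, and port forwarding and interactivity are disabled for CI. The same flow with a plain skaffold binary (the temp-file binary name above is specific to this run):

out/minikube-darwin-amd64 start -p skaffold-790000 --memory=2600 --driver=docker
skaffold run --minikube-profile skaffold-790000 --kube-context skaffold-790000 \
  --status-check=true --port-forward=false --interactive=false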

TestInsufficientStorage (14.69s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-261000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-261000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.492499455s)
-- stdout --
	{"specversion":"1.0","id":"89e92d1f-a42e-4688-8eed-8188943cb875","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-261000] minikube v1.29.0 on Darwin 13.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ea6887c-0a23-4a96-b2de-b44853c50a9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16144"}}
	{"specversion":"1.0","id":"913aff4b-9dc4-4b3f-98d2-c883096be4f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig"}}
	{"specversion":"1.0","id":"5186dd47-0a9f-4964-b646-a6b19a0bd04d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"432bb798-52b9-41b0-822e-869ff34012ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5388938e-4785-4725-b550-a3436f4211d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube"}}
	{"specversion":"1.0","id":"5a15726b-0433-4a0b-b05f-5a950350b33d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"41da4ffe-f46e-4ccb-82b4-f71df19c7154","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0f134cf8-d9d1-4c27-8313-069467c3d784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b600556d-48d4-4809-8b5e-ec0a3d7fbe48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"136adf75-7c28-4c7a-a61e-a7465862f0f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"2e099151-e390-42ea-92ba-2acb9fc94613","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-261000 in cluster insufficient-storage-261000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b60caa8-c03b-427b-b2d1-23e3a137bccf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"040cf489-107f-45a3-9350-e8f4531c03d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6b6cadf-3681-454b-bed0-49072e3277b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-261000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-261000 --output=json --layout=cluster: exit status 7 (392.555607ms)
-- stdout --
	{"Name":"insufficient-storage-261000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-261000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0331 10:56:44.400377   11602 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-261000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-261000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-261000 --output=json --layout=cluster: exit status 7 (396.487157ms)
-- stdout --
	{"Name":"insufficient-storage-261000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-261000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0331 10:56:44.797461   11612 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-261000" does not appear in /Users/jenkins/minikube-integration/16144-2324/kubeconfig
	E0331 10:56:44.806436   11612 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/insufficient-storage-261000/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-261000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-261000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-261000: (2.404557047s)
--- PASS: TestInsufficientStorage (14.69s)
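
Note: with --output=json, minikube start emits one CloudEvents JSON object per line, which is what makes this failure machine-readable: the final event above has type io.k8s.sigs.minikube.error, name RSRC_DOCKER_STORAGE, and exitcode 26. A sketch that extracts just the error message, assuming jq is installed on the host:

out/minikube-darwin-amd64 start -p insufficient-storage-261000 --output=json --driver=docker \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'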

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (12.64s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=16144
- KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3713256158/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3713256158/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3713256158/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3713256158/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (12.64s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (17.36s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=16144
- KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2784273864/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2784273864/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2784273864/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2784273864/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (17.36s)

TestStoppedBinaryUpgrade/Setup (4.41s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.41s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.49s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-369000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-369000: (3.486590274s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.49s)

TestPause/serial/Start (42.79s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-701000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0331 11:03:27.134771    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-701000 --memory=2048 --install-addons=false --wait=all --driver=docker : (42.792632944s)
--- PASS: TestPause/serial/Start (42.79s)

TestPause/serial/SecondStartNoReconfiguration (40.34s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-701000 --alsologtostderr -v=1 --driver=docker 
E0331 11:04:03.447544    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:04:30.541948    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-701000 --alsologtostderr -v=1 --driver=docker : (40.321925446s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.34s)

TestPause/serial/Pause (0.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-701000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

TestPause/serial/VerifyStatus (0.42s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-701000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-701000 --output=json --layout=cluster: exit status 2 (420.332517ms)
-- stdout --
	{"Name":"pause-701000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-701000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
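
Note: while paused, the cluster layout reports HTTP-flavored status codes: 418 (Paused) for the cluster and apiserver, 405 (Stopped) for the kubelet, and the status command itself exits 2, as shown above. A sketch that pulls out the top-level state, again assuming jq:

out/minikube-darwin-amd64 status -p pause-701000 --output=json --layout=cluster \
  | jq -r '.StatusName'    # prints "Paused" while the cluster is paused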

TestPause/serial/Unpause (0.67s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-701000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

TestPause/serial/PauseAgain (0.72s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-701000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (2.64s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-701000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-701000 --alsologtostderr -v=5: (2.637904967s)
--- PASS: TestPause/serial/DeletePaused (2.64s)

TestPause/serial/VerifyDeletedResources (0.57s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-701000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-701000: exit status 1 (57.009202ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-701000
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-291000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-291000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (428.327478ms)
-- stdout --
	* [NoKubernetes-291000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16144
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16144-2324/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16144-2324/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

TestNoKubernetes/serial/StartWithK8s (25.84s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-291000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-291000 --driver=docker : (25.416407777s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-291000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.84s)

TestNoKubernetes/serial/StartWithStopK8s (18.22s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-291000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-291000 --no-kubernetes --driver=docker : (15.371078104s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-291000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-291000 status -o json: exit status 2 (403.630656ms)
-- stdout --
	{"Name":"NoKubernetes-291000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-291000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-291000: (2.440892865s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.22s)

TestNoKubernetes/serial/Start (7.45s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-291000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-291000 --no-kubernetes --driver=docker : (7.445794099s)
--- PASS: TestNoKubernetes/serial/Start (7.45s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-291000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-291000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (387.21318ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (1.41s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.41s)

TestNoKubernetes/serial/Stop (1.63s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-291000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-291000: (1.628931039s)
--- PASS: TestNoKubernetes/serial/Stop (1.63s)

TestNoKubernetes/serial/StartNoArgs (5.37s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-291000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-291000 --driver=docker : (5.365598135s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-291000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-291000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (389.618744ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestNetworkPlugins/group/auto/Start (45.53s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (45.534039703s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.53s)

TestNetworkPlugins/group/auto/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

TestNetworkPlugins/group/auto/NetCatPod (12.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4clkg" [d15a119b-469e-4e02-9b0a-a47ebeafd453] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-4clkg" [d15a119b-469e-4e02-9b0a-a47ebeafd453] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.009921725s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.20s)

TestNetworkPlugins/group/auto/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/Start (55.08s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (55.078242787s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.08s)

TestNetworkPlugins/group/calico/Start (70.48s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m10.474939263s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.48s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nntsf" [8e40cc78-19c3-4de3-92b1-9f94dc8d8672] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019385138s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.49s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6fsnj" [e297ac81-fe41-46ce-9ef6-a40bb9f9110a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6fsnj" [e297ac81-fe41-46ce-9ef6-a40bb9f9110a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.011535408s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.49s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (57.9s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (57.896966643s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.90s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-j8md6" [7ed415e1-c8f5-4123-9916-f56603670608] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016305863s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (13.21s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-z9d7s" [80ed167d-bbf2-4bf4-8db9-60e1726cfeb3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-z9d7s" [80ed167d-bbf2-4bf4-8db9-60e1726cfeb3] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.00697246s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.21s)

TestNetworkPlugins/group/calico/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (16.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8gqkg" [de483b6b-7aa4-4a9a-9d9a-d5daad251f95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-8gqkg" [de483b6b-7aa4-4a9a-9d9a-d5daad251f95] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 16.012908856s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.26s)

TestNetworkPlugins/group/false/Start (42.79s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (42.790436592s)
--- PASS: TestNetworkPlugins/group/false/Start (42.79s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/enable-default-cni/Start (44.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (44.18568321s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (44.19s)

TestNetworkPlugins/group/false/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.42s)

TestNetworkPlugins/group/false/NetCatPod (13.21s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-v8nlh" [7ad3098a-2409-446e-a3b5-2f7f0c40a50f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-v8nlh" [7ad3098a-2409-446e-a3b5-2f7f0c40a50f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.009391899s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.21s)

TestNetworkPlugins/group/false/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.54s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6ch47" [ca06f172-c7b8-4878-bdd7-1ca98db548a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-6ch47" [ca06f172-c7b8-4878-bdd7-1ca98db548a8] Running
E0331 11:11:19.578481    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.1270043s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.33s)

TestNetworkPlugins/group/flannel/Start (56.05s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (56.053145205s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.05s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (43.39s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0331 11:11:50.073408    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (43.386792414s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.39s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-fbdgh" [8c19a0d7-318c-4156-aee5-1184baeb100c] Running
E0331 11:12:10.554508    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.014810931s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/flannel/NetCatPod (11.2s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-xj2bm" [064dbfad-834f-4fd5-a323-baca6f48478c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-xj2bm" [064dbfad-834f-4fd5-a323-baca6f48478c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.009142046s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.20s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

TestNetworkPlugins/group/bridge/NetCatPod (11.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dd549" [17b709da-e3da-4ed8-b78f-a64a5bf1753c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-dd549" [17b709da-e3da-4ed8-b78f-a64a5bf1753c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008386492s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (52.84s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0331 11:12:51.513860    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
E0331 11:13:00.461991    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:00.467497    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:00.477848    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:00.498357    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:00.538547    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:00.618644    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:00.778833    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:01.099245    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:01.741265    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:13:03.021599    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-346000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (52.840582977s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.84s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-346000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kubenet/NetCatPod (17.23s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-346000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-sjrb4" [e05cd916-193c-4937-9707-baecb0c74df6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-sjrb4" [e05cd916-193c-4937-9707-baecb0c74df6] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 17.006416973s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (17.23s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-346000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-346000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (66.7s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-374000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0
E0331 11:14:29.197788    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
E0331 11:14:30.511895    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 11:14:41.858253    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:41.863456    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:41.873906    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:41.893981    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:41.934781    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:42.014927    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:42.176213    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:42.496370    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:43.137219    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:44.417514    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:46.977506    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:14:49.679102    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
E0331 11:14:52.097379    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:15:02.336973    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:15:22.816506    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:15:30.639297    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-374000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0: (1m6.696602331s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.70s)

TestStartStop/group/no-preload/serial/DeployApp (14.29s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-374000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4ebf18d9-3b64-43e7-992a-6fd06f62ab71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0331 11:15:35.930454    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:35.936500    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:35.947769    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:35.969977    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:36.010656    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:36.091648    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:36.252471    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:36.572662    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:37.213035    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:38.495188    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:15:41.055288    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4ebf18d9-3b64-43e7-992a-6fd06f62ab71] Running
E0331 11:15:44.297730    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:15:46.176283    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 14.015591897s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-374000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-374000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-374000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/no-preload/serial/Stop (10.89s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-374000 --alsologtostderr -v=3
E0331 11:15:56.418099    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-374000 --alsologtostderr -v=3: (10.889700944s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.89s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-374000 -n no-preload-374000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-374000 -n no-preload-374000: exit status 7 (101.462336ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-374000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)
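
A note on the "(may be ok)" lines: minikube status reports a stopped host through its exit code rather than an error, so the test accepts exit status 7 here. A sketch of the check, using this run's profile:

    # While the cluster is stopped, status prints "Stopped" and exits non-zero.
    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-374000 -n no-preload-374000
    echo $?   # 7 in this run, matching the "exit status 7 (may be ok)" log line above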

TestStartStop/group/no-preload/serial/SecondStart (305.12s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-374000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0
E0331 11:16:03.775008    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:16:08.255242    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:08.260582    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:08.270924    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:08.293109    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:08.333384    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:08.413618    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:08.575859    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:08.897015    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:09.538551    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:10.823220    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:13.389670    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:16.911943    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:16:18.516707    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:19.580424    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:16:28.766934    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:29.602148    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
E0331 11:16:49.252759    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:16:52.587600    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
E0331 11:16:57.295648    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
E0331 11:16:57.890362    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:17:09.497239    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:09.503320    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:09.513416    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:09.533516    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:09.574287    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:09.654491    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:09.815538    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:10.136741    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:10.777227    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:12.057713    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:14.619189    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:19.739554    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:17:25.724717    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-374000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0: (5m4.680048739s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-374000 -n no-preload-374000
E0331 11:21:03.640682    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (305.12s)

TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-221000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-221000 --alsologtostderr -v=3: (1.590086607s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-221000 -n old-k8s-version-221000: exit status 7 (108.552868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-221000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vnl72" [1162612c-068b-4aee-8b46-7b2dfb76eadb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0331 11:21:08.273517    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vnl72" [1162612c-068b-4aee-8b46-7b2dfb76eadb] Running
E0331 11:21:19.582251    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.018203274s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-vnl72" [1162612c-068b-4aee-8b46-7b2dfb76eadb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010515638s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-374000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-374000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/no-preload/serial/Pause (3.29s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-374000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-374000 -n no-preload-374000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-374000 -n no-preload-374000: exit status 2 (417.844395ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-374000 -n no-preload-374000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-374000 -n no-preload-374000: exit status 2 (459.009332ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-374000 --alsologtostderr -v=1
E0331 11:21:28.769208    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-374000 -n no-preload-374000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-374000 -n no-preload-374000
E0331 11:21:29.594898    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.29s)
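
The Pause subtests exercise the same exit-code convention: while components are paused, status exits 2. A sketch of the cycle driven above, using this run's profile:

    out/minikube-darwin-amd64 pause -p no-preload-374000 --alsologtostderr -v=1
    # While paused: APIServer reports "Paused" and Kubelet "Stopped", each with exit status 2.
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-374000 -n no-preload-374000
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-374000 -n no-preload-374000
    out/minikube-darwin-amd64 unpause -p no-preload-374000 --alsologtostderr -v=1
    # After unpause, both status checks return to exit status 0.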

TestStartStop/group/embed-certs/serial/FirstStart (42.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3
E0331 11:21:35.963730    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:22:09.482841    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3: (42.629459669s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.63s)

TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-877000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4ad46471-5f93-467f-824b-1c104c9796a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4ad46471-5f93-467f-824b-1c104c9796a6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.047289382s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-877000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-877000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-877000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/embed-certs/serial/Stop (11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-877000 --alsologtostderr -v=3
E0331 11:22:29.782315    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:22:37.169736    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-877000 --alsologtostderr -v=3: (11.001435524s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-877000 -n embed-certs-877000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-877000 -n embed-certs-877000: exit status 7 (102.735641ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-877000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/embed-certs/serial/SecondStart (557.85s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3
E0331 11:22:57.469408    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
E0331 11:23:00.464630    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kindnet-346000/client.crt: no such file or directory
E0331 11:23:27.107762    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 11:23:44.915326    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:24:08.719012    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
E0331 11:24:12.602383    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
E0331 11:24:13.573069    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 11:24:30.515232    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/functional-281000/client.crt: no such file or directory
E0331 11:24:41.861787    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
E0331 11:25:32.488907    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:32.494700    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:32.504815    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:32.526888    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:32.567968    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:32.649103    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:32.810887    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:33.131864    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:33.772213    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:35.053149    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:35.934102    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/false-346000/client.crt: no such file or directory
E0331 11:25:37.613863    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:42.735843    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:25:52.976868    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:26:08.258572    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/enable-default-cni-346000/client.crt: no such file or directory
E0331 11:26:13.457027    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:26:19.566861    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/skaffold-790000/client.crt: no such file or directory
E0331 11:26:29.580072    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/auto-346000/client.crt: no such file or directory
E0331 11:26:54.415983    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/no-preload-374000/client.crt: no such file or directory
E0331 11:27:09.467805    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
E0331 11:27:29.767031    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/bridge-346000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-877000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3: (9m17.421095136s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-877000 -n embed-certs-877000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (557.85s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-ctzhv" [a68859fb-b149-4216-9beb-f94b0348eccb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013060797s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-ctzhv" [a68859fb-b149-4216-9beb-f94b0348eccb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007051217s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-877000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/embed-certs/serial/Pause (3.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-877000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-877000 -n embed-certs-877000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-877000 -n embed-certs-877000: exit status 2 (426.644825ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-877000 -n embed-certs-877000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-877000 -n embed-certs-877000: exit status 2 (420.86083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-877000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-877000 -n embed-certs-877000
E0331 11:32:09.506707    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-877000 -n embed-certs-877000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-594000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-594000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3: (50.621450306s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.62s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-594000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2c32b8a-2cf1-4f2f-8cec-61169336f8d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2c32b8a-2cf1-4f2f-8cec-61169336f8d1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.015670658s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-594000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-594000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-594000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-594000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-594000 --alsologtostderr -v=3: (10.908866936s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000: exit status 7 (103.136626ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-594000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (309.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-594000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3
E0331 11:33:27.130904    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/addons-841000/client.crt: no such file or directory
E0331 11:33:32.550833    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/flannel-346000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-594000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3: (5m8.914105396s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (309.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-mxr4h" [18484801-0e24-4793-a6d3-a0548c5fa177] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-mxr4h" [18484801-0e24-4793-a6d3-a0548c5fa177] Running
E0331 11:38:44.926275    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/kubenet-346000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.012355302s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-mxr4h" [18484801-0e24-4793-a6d3-a0548c5fa177] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007065574s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-594000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-594000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-594000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000: exit status 2 (421.319509ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000: exit status 2 (419.079431ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-594000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-594000 -n default-k8s-diff-port-594000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

TestStartStop/group/newest-cni/serial/FirstStart (38.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-822000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0
E0331 11:39:08.730656    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/calico-346000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-822000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0: (38.073948508s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.07s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-822000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-822000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.002277509s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)
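
The warning is expected for this profile: it starts with --network-plugin=cni and only kubeadm's pod-network-cidr configured, so no CNI manifest has been applied and workload pods would stay Pending, which is why the app-deployment subtests below are skipped. A quick manual check would look something like:

    # Without a CNI plugin installed, the node stays NotReady and non-host-network pods stay Pending.
    kubectl --context newest-cni-822000 get nodes
    kubectl --context newest-cni-822000 get pods -A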

TestStartStop/group/newest-cni/serial/Stop (11.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-822000 --alsologtostderr -v=3
E0331 11:39:41.874676    2800 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16144-2324/.minikube/profiles/custom-flannel-346000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-822000 --alsologtostderr -v=3: (11.073012525s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-822000 -n newest-cni-822000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-822000 -n newest-cni-822000: exit status 7 (103.78198ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-822000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/newest-cni/serial/SecondStart (24.78s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-822000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-822000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0: (24.347396752s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-822000 -n newest-cni-822000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.78s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-822000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/newest-cni/serial/Pause (3.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-822000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-822000 -n newest-cni-822000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-822000 -n newest-cni-822000: exit status 2 (415.927688ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-822000 -n newest-cni-822000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-822000 -n newest-cni-822000: exit status 2 (420.723437ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-822000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-822000 -n newest-cni-822000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-822000 -n newest-cni-822000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.14s)

Test skip (20/318)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.3/cached-images (0.00s)

TestDownloadOnly/v1.26.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.3/binaries (0.00s)

TestDownloadOnly/v1.27.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.27.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.27.0-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (15.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:305: registry stabilized in 10.346016ms
addons_test.go:307: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-dqpwn" [8ce9baa0-80c7-469c-863e-f8d24e15e631] Running
addons_test.go:307: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00788684s
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kc5d7" [e84dff14-a68c-469b-a757-044671a77171] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008835139s
addons_test.go:315: (dbg) Run:  kubectl --context addons-841000 delete po -l run=registry-test --now
addons_test.go:320: (dbg) Run:  kubectl --context addons-841000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:320: (dbg) Done: kubectl --context addons-841000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.746129664s)
addons_test.go:330: Unable to complete the rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.86s)
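
Note: the in-cluster reachability probe that ran before this skip can be issued manually. A minimal sketch, assuming the same addons-841000 context is still available (the command and the service DNS name are quoted from the log above):

  kubectl --context addons-841000 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"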

TestAddons/parallel/Ingress (12.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:182: (dbg) Run:  kubectl --context addons-841000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Run:  kubectl --context addons-841000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:220: (dbg) Run:  kubectl --context addons-841000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:225: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9bee0faf-5250-4c96-9f06-4f39ccdab9a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9bee0faf-5250-4c96-9f06-4f39ccdab9a5] Running
addons_test.go:225: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.007533675s
addons_test.go:237: (dbg) Run:  out/minikube-darwin-amd64 -p addons-841000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:257: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.25s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:473: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not Linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-281000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-281000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-75df956f7d-p7jxx" [29c961fc-022e-43b0-8fe7-37e8fb766c66] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-75df956f7d-p7jxx" [29c961fc-022e-43b0-8fe7-37e8fb766c66] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.007470091s
functional_test.go:1644: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping DNS forwarding test
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on Windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-346000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-346000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-346000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /etc/hosts:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /etc/resolv.conf:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-346000

>>> host: crictl pods:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: crictl containers:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> k8s: describe netcat deployment:
error: context "cilium-346000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-346000" does not exist

>>> k8s: netcat logs:
error: context "cilium-346000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-346000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-346000" does not exist

>>> k8s: coredns logs:
error: context "cilium-346000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-346000" does not exist

>>> k8s: api server logs:
error: context "cilium-346000" does not exist

>>> host: /etc/cni:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: ip a s:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: ip r s:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: iptables-save:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: iptables table nat:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-346000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-346000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-346000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-346000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-346000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-346000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-346000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-346000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-346000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-346000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-346000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: kubelet daemon config:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> k8s: kubelet logs:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
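
Note: the empty kubeconfig above is why every kubectl call in this debug dump fails with "context was not found": the cilium-346000 profile was never started, so no context was ever written. A minimal sketch of how this could be confirmed and remedied (the start command is quoted from the log; the --cni=cilium flag is an assumption for this network-plugin group):

  kubectl config get-contexts                   # shows no cilium-346000 entry, matching the dump above
  minikube start -p cilium-346000 --cni=cilium  # would create the profile and write its context (--cni value assumed)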

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-346000

>>> host: docker daemon status:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: docker daemon config:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: docker system info:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: cri-docker daemon status:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: cri-docker daemon config:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: cri-dockerd version:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: containerd daemon status:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: containerd daemon config:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: containerd config dump:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: crio daemon status:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: crio daemon config:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: /etc/crio:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

>>> host: crio config:
* Profile "cilium-346000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-346000"

----------------------- debugLogs end: cilium-346000 [took: 5.394564773s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-346000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-346000
--- SKIP: TestNetworkPlugins/group/cilium (5.90s)

TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-563000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-563000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)
