Test Report: Docker_macOS 16199

f372fef9a5d1d206962183895d60b784517ffedc:2023-03-30:28570

Failed tests (15/318)

TestIngressAddonLegacy/StartLegacyK8sCluster (276.18s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-106000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0330 08:48:56.164942   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:51:12.325808   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:51:18.008142   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.013570   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.025632   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.046144   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.086236   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.167097   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.328673   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:18.650837   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:19.291817   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:20.572711   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:23.135009   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:28.257436   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:38.498406   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:51:40.011932   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:51:58.980676   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:52:39.942374   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-106000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m36.140497501s)

-- stdout --
	* [ingress-addon-legacy-106000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-106000 in cluster ingress-addon-legacy-106000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0330 08:48:39.786693   28466 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:48:39.786872   28466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:48:39.786878   28466 out.go:309] Setting ErrFile to fd 2...
	I0330 08:48:39.786882   28466 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:48:39.787005   28466 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 08:48:39.788481   28466 out.go:303] Setting JSON to false
	I0330 08:48:39.808583   28466 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6487,"bootTime":1680184832,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 08:48:39.808678   28466 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 08:48:39.830790   28466 out.go:177] * [ingress-addon-legacy-106000] minikube v1.29.0 on Darwin 13.3
	I0330 08:48:39.872916   28466 notify.go:220] Checking for updates...
	I0330 08:48:39.894650   28466 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 08:48:39.915734   28466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 08:48:39.936784   28466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 08:48:39.957825   28466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 08:48:39.978723   28466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 08:48:39.999766   28466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 08:48:40.020934   28466 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 08:48:40.086348   28466 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 08:48:40.086469   28466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:48:40.274261   28466 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:48:40.138873327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:48:40.296019   28466 out.go:177] * Using the docker driver based on user configuration
	I0330 08:48:40.317922   28466 start.go:295] selected driver: docker
	I0330 08:48:40.317942   28466 start.go:859] validating driver "docker" against <nil>
	I0330 08:48:40.317956   28466 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 08:48:40.322130   28466 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:48:40.507740   28466 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:46 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:48:40.374493638 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:48:40.507865   28466 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0330 08:48:40.508040   28466 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0330 08:48:40.529793   28466 out.go:177] * Using Docker Desktop driver with root privileges
	I0330 08:48:40.551499   28466 cni.go:84] Creating CNI manager for ""
	I0330 08:48:40.551537   28466 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 08:48:40.551562   28466 start_flags.go:319] config:
	{Name:ingress-addon-legacy-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 08:48:40.594587   28466 out.go:177] * Starting control plane node ingress-addon-legacy-106000 in cluster ingress-addon-legacy-106000
	I0330 08:48:40.616490   28466 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 08:48:40.637507   28466 out.go:177] * Pulling base image ...
	I0330 08:48:40.679410   28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0330 08:48:40.679458   28466 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 08:48:40.743679   28466 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 08:48:40.743700   28466 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 08:48:40.780580   28466 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0330 08:48:40.780606   28466 cache.go:57] Caching tarball of preloaded images
	I0330 08:48:40.781013   28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0330 08:48:40.802615   28466 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0330 08:48:40.844438   28466 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:48:41.044896   28466 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0330 08:49:04.629866   28466 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:49:04.630051   28466 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:49:05.247918   28466 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0330 08:49:05.248274   28466 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/config.json ...
	I0330 08:49:05.248301   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/config.json: {Name:mkf52b5e721448e731f6e88518122ed38f5b2097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:05.248634   28466 cache.go:193] Successfully downloaded all kic artifacts
	I0330 08:49:05.248659   28466 start.go:364] acquiring machines lock for ingress-addon-legacy-106000: {Name:mka03da6851c44848a95a9e100f1a914957cd2eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 08:49:05.248832   28466 start.go:368] acquired machines lock for "ingress-addon-legacy-106000" in 152.226µs
	I0330 08:49:05.248881   28466 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 08:49:05.248948   28466 start.go:125] createHost starting for "" (driver="docker")
	I0330 08:49:05.270314   28466 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0330 08:49:05.270664   28466 start.go:159] libmachine.API.Create for "ingress-addon-legacy-106000" (driver="docker")
	I0330 08:49:05.270718   28466 client.go:168] LocalClient.Create starting
	I0330 08:49:05.270912   28466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem
	I0330 08:49:05.270987   28466 main.go:141] libmachine: Decoding PEM data...
	I0330 08:49:05.271018   28466 main.go:141] libmachine: Parsing certificate...
	I0330 08:49:05.271146   28466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem
	I0330 08:49:05.271197   28466 main.go:141] libmachine: Decoding PEM data...
	I0330 08:49:05.271212   28466 main.go:141] libmachine: Parsing certificate...
	I0330 08:49:05.292489   28466 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-106000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0330 08:49:05.354332   28466 cli_runner.go:211] docker network inspect ingress-addon-legacy-106000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0330 08:49:05.354466   28466 network_create.go:281] running [docker network inspect ingress-addon-legacy-106000] to gather additional debugging logs...
	I0330 08:49:05.354484   28466 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-106000
	W0330 08:49:05.411956   28466 cli_runner.go:211] docker network inspect ingress-addon-legacy-106000 returned with exit code 1
	I0330 08:49:05.411983   28466 network_create.go:284] error running [docker network inspect ingress-addon-legacy-106000]: docker network inspect ingress-addon-legacy-106000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-106000
	I0330 08:49:05.412004   28466 network_create.go:286] output of [docker network inspect ingress-addon-legacy-106000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-106000
	
	** /stderr **
	I0330 08:49:05.412090   28466 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0330 08:49:05.469923   28466 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0007dca70}
	I0330 08:49:05.469958   28466 network_create.go:123] attempt to create docker network ingress-addon-legacy-106000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0330 08:49:05.470039   28466 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 ingress-addon-legacy-106000
	I0330 08:49:05.560387   28466 network_create.go:107] docker network ingress-addon-legacy-106000 192.168.49.0/24 created
	I0330 08:49:05.560420   28466 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-106000" container
	I0330 08:49:05.560545   28466 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0330 08:49:05.618687   28466 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-106000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --label created_by.minikube.sigs.k8s.io=true
	I0330 08:49:05.678238   28466 oci.go:103] Successfully created a docker volume ingress-addon-legacy-106000
	I0330 08:49:05.678375   28466 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-106000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --entrypoint /usr/bin/test -v ingress-addon-legacy-106000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0330 08:49:06.149527   28466 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-106000
	I0330 08:49:06.149558   28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0330 08:49:06.149573   28466 kic.go:190] Starting extracting preloaded images to volume ...
	I0330 08:49:06.149706   28466 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-106000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0330 08:49:12.426694   28466 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-106000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (6.27671951s)
	I0330 08:49:12.426720   28466 kic.go:199] duration metric: took 6.276966 seconds to extract preloaded images to volume
	I0330 08:49:12.426836   28466 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0330 08:49:12.613296   28466 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-106000 --name ingress-addon-legacy-106000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-106000 --network ingress-addon-legacy-106000 --ip 192.168.49.2 --volume ingress-addon-legacy-106000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0330 08:49:12.980178   28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Running}}
	I0330 08:49:13.045172   28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
	I0330 08:49:13.109790   28466 cli_runner.go:164] Run: docker exec ingress-addon-legacy-106000 stat /var/lib/dpkg/alternatives/iptables
	I0330 08:49:13.231810   28466 oci.go:144] the created container "ingress-addon-legacy-106000" has a running status.
	I0330 08:49:13.231852   28466 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa...
	I0330 08:49:13.439580   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0330 08:49:13.439656   28466 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0330 08:49:13.544776   28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
	I0330 08:49:13.606523   28466 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0330 08:49:13.606544   28466 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-106000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0330 08:49:13.718426   28466 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
	I0330 08:49:13.778072   28466 machine.go:88] provisioning docker machine ...
	I0330 08:49:13.778112   28466 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-106000"
	I0330 08:49:13.778222   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:13.838678   28466 main.go:141] libmachine: Using SSH client type: native
	I0330 08:49:13.839055   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56053 <nil> <nil>}
	I0330 08:49:13.839071   28466 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-106000 && echo "ingress-addon-legacy-106000" | sudo tee /etc/hostname
	I0330 08:49:13.966708   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-106000
	
	I0330 08:49:13.966792   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:14.027896   28466 main.go:141] libmachine: Using SSH client type: native
	I0330 08:49:14.028242   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56053 <nil> <nil>}
	I0330 08:49:14.028263   28466 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-106000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-106000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-106000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 08:49:14.146671   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 08:49:14.146699   28466 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 08:49:14.146717   28466 ubuntu.go:177] setting up certificates
	I0330 08:49:14.146725   28466 provision.go:83] configureAuth start
	I0330 08:49:14.146805   28466 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-106000
	I0330 08:49:14.207343   28466 provision.go:138] copyHostCerts
	I0330 08:49:14.207387   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 08:49:14.207449   28466 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 08:49:14.207457   28466 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 08:49:14.207578   28466 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 08:49:14.207767   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 08:49:14.207805   28466 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 08:49:14.207810   28466 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 08:49:14.207871   28466 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 08:49:14.207988   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 08:49:14.208028   28466 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 08:49:14.208033   28466 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 08:49:14.208088   28466 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 08:49:14.208199   28466 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-106000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-106000]
	I0330 08:49:14.260169   28466 provision.go:172] copyRemoteCerts
	I0330 08:49:14.260234   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 08:49:14.260284   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:14.320707   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
	I0330 08:49:14.407743   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0330 08:49:14.407821   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 08:49:14.425011   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0330 08:49:14.425082   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0330 08:49:14.442276   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0330 08:49:14.442339   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0330 08:49:14.460460   28466 provision.go:86] duration metric: configureAuth took 313.71574ms
	I0330 08:49:14.460474   28466 ubuntu.go:193] setting minikube options for container-runtime
	I0330 08:49:14.460628   28466 config.go:182] Loaded profile config "ingress-addon-legacy-106000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0330 08:49:14.460695   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:14.521207   28466 main.go:141] libmachine: Using SSH client type: native
	I0330 08:49:14.521561   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56053 <nil> <nil>}
	I0330 08:49:14.521578   28466 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 08:49:14.637715   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 08:49:14.637729   28466 ubuntu.go:71] root file system type: overlay
	I0330 08:49:14.637833   28466 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 08:49:14.637920   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:14.697377   28466 main.go:141] libmachine: Using SSH client type: native
	I0330 08:49:14.697717   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56053 <nil> <nil>}
	I0330 08:49:14.697770   28466 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 08:49:14.825084   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 08:49:14.825202   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:14.885148   28466 main.go:141] libmachine: Using SSH client type: native
	I0330 08:49:14.885501   28466 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56053 <nil> <nil>}
	I0330 08:49:14.885516   28466 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 08:49:15.490390   28466 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 15:49:14.822905858 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0330 08:49:15.490422   28466 machine.go:91] provisioned docker machine in 1.712275551s
	I0330 08:49:15.490432   28466 client.go:171] LocalClient.Create took 10.219411101s
	I0330 08:49:15.490452   28466 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-106000" took 10.219494346s
	I0330 08:49:15.490462   28466 start.go:300] post-start starting for "ingress-addon-legacy-106000" (driver="docker")
	I0330 08:49:15.490468   28466 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 08:49:15.490547   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 08:49:15.490611   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:15.554184   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
	I0330 08:49:15.642991   28466 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 08:49:15.646692   28466 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 08:49:15.646715   28466 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 08:49:15.646725   28466 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 08:49:15.646729   28466 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 08:49:15.646738   28466 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 08:49:15.646835   28466 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 08:49:15.646999   28466 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 08:49:15.647006   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> /etc/ssl/certs/254482.pem
	I0330 08:49:15.647200   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 08:49:15.654630   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 08:49:15.672099   28466 start.go:303] post-start completed in 181.621912ms
	I0330 08:49:15.672687   28466 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-106000
	I0330 08:49:15.738379   28466 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/config.json ...
	I0330 08:49:15.738826   28466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 08:49:15.738896   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:15.798923   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
	I0330 08:49:15.882436   28466 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 08:49:15.887204   28466 start.go:128] duration metric: createHost completed in 10.637938553s
	I0330 08:49:15.887223   28466 start.go:83] releasing machines lock for "ingress-addon-legacy-106000", held for 10.638076888s
	I0330 08:49:15.887328   28466 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-106000
	I0330 08:49:15.947076   28466 ssh_runner.go:195] Run: cat /version.json
	I0330 08:49:15.947128   28466 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0330 08:49:15.947144   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:15.947202   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:16.014459   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
	I0330 08:49:16.016117   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
	I0330 08:49:16.360511   28466 ssh_runner.go:195] Run: systemctl --version
	I0330 08:49:16.365485   28466 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0330 08:49:16.370624   28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0330 08:49:16.391231   28466 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0330 08:49:16.391306   28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0330 08:49:16.405223   28466 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0330 08:49:16.412829   28466 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0330 08:49:16.412847   28466 start.go:481] detecting cgroup driver to use...
	I0330 08:49:16.412859   28466 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 08:49:16.412937   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 08:49:16.426441   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0330 08:49:16.434973   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 08:49:16.443319   28466 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 08:49:16.443374   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 08:49:16.451912   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 08:49:16.460432   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 08:49:16.468990   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 08:49:16.477375   28466 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 08:49:16.485281   28466 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 08:49:16.493870   28466 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 08:49:16.500972   28466 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 08:49:16.508081   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 08:49:16.569484   28466 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 08:49:16.644180   28466 start.go:481] detecting cgroup driver to use...
	I0330 08:49:16.644207   28466 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 08:49:16.644275   28466 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 08:49:16.654914   28466 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 08:49:16.654982   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 08:49:16.665258   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 08:49:16.680176   28466 ssh_runner.go:195] Run: which cri-dockerd
	I0330 08:49:16.684407   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 08:49:16.693161   28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (184 bytes)
	I0330 08:49:16.707940   28466 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 08:49:16.799529   28466 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 08:49:16.891166   28466 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 08:49:16.891185   28466 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 08:49:16.904686   28466 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 08:49:16.996298   28466 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 08:49:17.214928   28466 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 08:49:17.241741   28466 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 08:49:17.314286   28466 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	I0330 08:49:17.314500   28466 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-106000 dig +short host.docker.internal
	I0330 08:49:17.433383   28466 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 08:49:17.433501   28466 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 08:49:17.437904   28466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 08:49:17.448144   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:49:17.510182   28466 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0330 08:49:17.510279   28466 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 08:49:17.530353   28466 docker.go:639] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0330 08:49:17.530373   28466 docker.go:569] Images already preloaded, skipping extraction
	I0330 08:49:17.530466   28466 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 08:49:17.551435   28466 docker.go:639] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0330 08:49:17.551452   28466 cache_images.go:84] Images are preloaded, skipping loading
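	A minimal sketch (not minikube code) of what "Images are preloaded, skipping loading" means in practice: run the same "docker images --format {{.Repository}}:{{.Tag}}" command inside the node and confirm every image from the preloaded list above is present. The expected list is copied from the stdout block above; run it inside the ingress-addon-legacy-106000 container.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Expected images, taken from the "Got preloaded images" stdout above.
		expected := []string{
			"k8s.gcr.io/kube-proxy:v1.18.20",
			"k8s.gcr.io/kube-apiserver:v1.18.20",
			"k8s.gcr.io/kube-scheduler:v1.18.20",
			"k8s.gcr.io/kube-controller-manager:v1.18.20",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"k8s.gcr.io/pause:3.2",
			"k8s.gcr.io/coredns:1.6.7",
			"k8s.gcr.io/etcd:3.4.3-0",
		}
		// Same listing command that appears in the log above.
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println("docker images failed:", err)
			return
		}
		have := string(out)
		for _, img := range expected {
			if !strings.Contains(have, img) {
				fmt.Println("missing preloaded image:", img)
			}
		}
		fmt.Println("preload check done")
	}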
	I0330 08:49:17.551529   28466 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 08:49:17.579226   28466 cni.go:84] Creating CNI manager for ""
	I0330 08:49:17.579248   28466 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 08:49:17.579266   28466 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 08:49:17.579289   28466 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-106000 NodeName:ingress-addon-legacy-106000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 08:49:17.579400   28466 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-106000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
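	A minimal sketch (not minikube code) for sanity-checking the generated config above: it reads /var/tmp/minikube/kubeadm.yaml (the path used later in this log), extracts the cgroupDriver value from the KubeletConfiguration document, and compares it with what Docker reports via "docker info --format {{.CgroupDriver}}" (also run later in this log). A mismatch between these two values is one of the usual reasons a kubelet refuses to start; run the sketch inside the node.

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Read the multi-document kubeadm config generated above.
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		// Pull out the top-level "cgroupDriver:" key from the kubelet section.
		want := ""
		for _, line := range strings.Split(string(data), "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "cgroupDriver:") {
				want = strings.TrimSpace(strings.TrimPrefix(trimmed, "cgroupDriver:"))
			}
		}
		// Ask Docker which driver it is actually using, as minikube does below.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Printf("kubeadm.yaml cgroupDriver=%q, docker reports %q\n", want, got)
		if want != got {
			fmt.Println("cgroup driver mismatch: the kubelet will likely fail to start")
		}
	}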
	I0330 08:49:17.579472   28466 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-106000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0330 08:49:17.579544   28466 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0330 08:49:17.587521   28466 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 08:49:17.587590   28466 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 08:49:17.595146   28466 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0330 08:49:17.608223   28466 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0330 08:49:17.621105   28466 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0330 08:49:17.634563   28466 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0330 08:49:17.638389   28466 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 08:49:17.648427   28466 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000 for IP: 192.168.49.2
	I0330 08:49:17.648445   28466 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:17.648615   28466 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 08:49:17.648692   28466 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 08:49:17.648733   28466 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.key
	I0330 08:49:17.648746   28466 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.crt with IP's: []
	I0330 08:49:17.714390   28466 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.crt ...
	I0330 08:49:17.714400   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.crt: {Name:mk06cae9dd57d2f59864f4f73d73ab5c187b7451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:17.714705   28466 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.key ...
	I0330 08:49:17.714714   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/client.key: {Name:mk763d7d2d2054d57c17008ee420bc5c87b1e530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:17.714918   28466 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2
	I0330 08:49:17.714934   28466 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0330 08:49:18.017594   28466 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2 ...
	I0330 08:49:18.017605   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2: {Name:mkaad4b5c2d1334472c6b9d39cd0c7762374ff65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:18.017907   28466 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2 ...
	I0330 08:49:18.017920   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2: {Name:mk681b41f39aff6e4c66737c50b1786379bffbc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:18.018188   28466 certs.go:333] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt
	I0330 08:49:18.018426   28466 certs.go:337] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key
	I0330 08:49:18.018629   28466 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key
	I0330 08:49:18.018643   28466 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt with IP's: []
	I0330 08:49:18.244020   28466 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt ...
	I0330 08:49:18.244033   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt: {Name:mk2a66ffa942d3d99e7bcc78026c98562ae3512d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:18.244317   28466 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key ...
	I0330 08:49:18.244328   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key: {Name:mk2f56053e25ffd6ac1b7edebdc05af692d48cec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:49:18.244554   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0330 08:49:18.244582   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0330 08:49:18.244601   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0330 08:49:18.244675   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0330 08:49:18.244726   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0330 08:49:18.244746   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0330 08:49:18.244762   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0330 08:49:18.244779   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0330 08:49:18.244901   28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 08:49:18.244949   28466 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 08:49:18.244963   28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 08:49:18.244993   28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 08:49:18.245021   28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 08:49:18.245056   28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 08:49:18.245128   28466 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 08:49:18.245166   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> /usr/share/ca-certificates/254482.pem
	I0330 08:49:18.245186   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0330 08:49:18.245207   28466 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem -> /usr/share/ca-certificates/25448.pem
	I0330 08:49:18.245692   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 08:49:18.264194   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0330 08:49:18.281662   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 08:49:18.298835   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/ingress-addon-legacy-106000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0330 08:49:18.316376   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 08:49:18.333599   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 08:49:18.351119   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 08:49:18.368426   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 08:49:18.385791   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 08:49:18.403713   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 08:49:18.421103   28466 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 08:49:18.438536   28466 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 08:49:18.451759   28466 ssh_runner.go:195] Run: openssl version
	I0330 08:49:18.457377   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 08:49:18.465705   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 08:49:18.469808   28466 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 08:49:18.469857   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 08:49:18.475645   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
	I0330 08:49:18.483787   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 08:49:18.491788   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 08:49:18.495936   28466 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 08:49:18.495988   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 08:49:18.501472   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 08:49:18.509911   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 08:49:18.518139   28466 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 08:49:18.522121   28466 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 08:49:18.522200   28466 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 08:49:18.527515   28466 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
	I0330 08:49:18.535618   28466 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-106000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 08:49:18.535733   28466 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 08:49:18.555057   28466 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 08:49:18.563233   28466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 08:49:18.570783   28466 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 08:49:18.570830   28466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 08:49:18.578323   28466 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 08:49:18.578361   28466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 08:49:18.627344   28466 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0330 08:49:18.627389   28466 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 08:49:18.797926   28466 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 08:49:18.798010   28466 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 08:49:18.798087   28466 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 08:49:18.951704   28466 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 08:49:18.952178   28466 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 08:49:18.952218   28466 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0330 08:49:19.026632   28466 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 08:49:19.067749   28466 out.go:204]   - Generating certificates and keys ...
	I0330 08:49:19.067834   28466 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 08:49:19.067939   28466 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 08:49:19.150687   28466 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0330 08:49:19.363611   28466 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0330 08:49:19.642874   28466 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0330 08:49:19.752741   28466 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0330 08:49:19.846785   28466 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0330 08:49:19.846901   28466 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0330 08:49:19.949666   28466 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0330 08:49:19.949774   28466 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0330 08:49:20.446662   28466 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0330 08:49:20.477859   28466 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0330 08:49:20.674987   28466 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0330 08:49:20.675076   28466 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 08:49:21.040673   28466 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 08:49:21.235530   28466 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 08:49:21.383755   28466 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 08:49:21.699969   28466 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 08:49:21.700526   28466 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 08:49:21.722083   28466 out.go:204]   - Booting up control plane ...
	I0330 08:49:21.722207   28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 08:49:21.722272   28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 08:49:21.722332   28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 08:49:21.722393   28466 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 08:49:21.722523   28466 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 08:50:01.710950   28466 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 08:50:01.712009   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:50:01.712230   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:50:06.713412   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:50:06.713614   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:50:16.715741   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:50:16.715969   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:50:36.718311   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:50:36.718515   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:51:16.721574   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:51:16.721931   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:51:16.721956   28466 kubeadm.go:322] 
	I0330 08:51:16.722008   28466 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0330 08:51:16.722056   28466 kubeadm.go:322] 		timed out waiting for the condition
	I0330 08:51:16.722066   28466 kubeadm.go:322] 
	I0330 08:51:16.722113   28466 kubeadm.go:322] 	This error is likely caused by:
	I0330 08:51:16.722188   28466 kubeadm.go:322] 		- The kubelet is not running
	I0330 08:51:16.722335   28466 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 08:51:16.722356   28466 kubeadm.go:322] 
	I0330 08:51:16.722476   28466 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 08:51:16.722517   28466 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0330 08:51:16.722561   28466 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0330 08:51:16.722567   28466 kubeadm.go:322] 
	I0330 08:51:16.722709   28466 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 08:51:16.722831   28466 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0330 08:51:16.722854   28466 kubeadm.go:322] 
	I0330 08:51:16.722951   28466 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0330 08:51:16.723016   28466 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0330 08:51:16.723113   28466 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0330 08:51:16.723146   28466 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0330 08:51:16.723151   28466 kubeadm.go:322] 
	I0330 08:51:16.726658   28466 kubeadm.go:322] W0330 15:49:18.626128    1166 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0330 08:51:16.726829   28466 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 08:51:16.726902   28466 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 08:51:16.727014   28466 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0330 08:51:16.727103   28466 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 08:51:16.727191   28466 kubeadm.go:322] W0330 15:49:21.704366    1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0330 08:51:16.727303   28466 kubeadm.go:322] W0330 15:49:21.705180    1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0330 08:51:16.727364   28466 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 08:51:16.727428   28466 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0330 08:51:16.727721   28466 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0330 15:49:18.626128    1166 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0330 15:49:21.704366    1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0330 15:49:21.705180    1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-106000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0330 15:49:18.626128    1166 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0330 15:49:21.704366    1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0330 15:49:21.705180    1166 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
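	The repeated [kubelet-check] failures above are kubeadm polling GET http://localhost:10248/healthz on the node and getting "connection refused" because the kubelet never comes up. A minimal standalone sketch of that probe is below (not minikube or kubeadm code); running it inside the ingress-addon-legacy-106000 container shows whether the kubelet ever starts answering, without waiting out kubeadm's 4m0s timeout.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		for attempt := 1; attempt <= 10; attempt++ {
			// Same endpoint kubeadm's kubelet-check queries in the log above.
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				// "connection refused" here matches the failure above: nothing is listening.
				fmt.Printf("attempt %d: kubelet not healthy: %v\n", attempt, err)
			} else {
				fmt.Printf("attempt %d: healthz returned %s\n", attempt, resp.Status)
				resp.Body.Close()
			}
			time.Sleep(5 * time.Second)
		}
	}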
	I0330 08:51:16.727765   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0330 08:51:17.142218   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 08:51:17.152019   28466 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 08:51:17.152076   28466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 08:51:17.159646   28466 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 08:51:17.159667   28466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 08:51:17.208398   28466 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0330 08:51:17.208449   28466 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 08:51:17.376078   28466 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 08:51:17.376200   28466 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 08:51:17.376275   28466 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 08:51:17.531937   28466 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 08:51:17.532396   28466 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 08:51:17.532671   28466 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0330 08:51:17.610544   28466 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 08:51:17.632150   28466 out.go:204]   - Generating certificates and keys ...
	I0330 08:51:17.632224   28466 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 08:51:17.632321   28466 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 08:51:17.632415   28466 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 08:51:17.632482   28466 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 08:51:17.632551   28466 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 08:51:17.632624   28466 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 08:51:17.632688   28466 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 08:51:17.632742   28466 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 08:51:17.632828   28466 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 08:51:17.632903   28466 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 08:51:17.632940   28466 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 08:51:17.632987   28466 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 08:51:17.821306   28466 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 08:51:17.912641   28466 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 08:51:18.032469   28466 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 08:51:18.336469   28466 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 08:51:18.337022   28466 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 08:51:18.358638   28466 out.go:204]   - Booting up control plane ...
	I0330 08:51:18.358817   28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 08:51:18.359019   28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 08:51:18.359159   28466 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 08:51:18.359374   28466 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 08:51:18.359675   28466 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 08:51:58.347328   28466 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 08:51:58.348328   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:51:58.348551   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:52:03.349774   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:52:03.349947   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:52:13.352032   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:52:13.352274   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:52:33.354600   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:52:33.354825   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:53:13.357584   28466 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 08:53:13.357815   28466 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 08:53:13.357827   28466 kubeadm.go:322] 
	I0330 08:53:13.357902   28466 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0330 08:53:13.357960   28466 kubeadm.go:322] 		timed out waiting for the condition
	I0330 08:53:13.357969   28466 kubeadm.go:322] 
	I0330 08:53:13.358024   28466 kubeadm.go:322] 	This error is likely caused by:
	I0330 08:53:13.358073   28466 kubeadm.go:322] 		- The kubelet is not running
	I0330 08:53:13.358194   28466 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 08:53:13.358209   28466 kubeadm.go:322] 
	I0330 08:53:13.358324   28466 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 08:53:13.358380   28466 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0330 08:53:13.358414   28466 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0330 08:53:13.358419   28466 kubeadm.go:322] 
	I0330 08:53:13.358541   28466 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 08:53:13.358630   28466 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0330 08:53:13.358638   28466 kubeadm.go:322] 
	I0330 08:53:13.358747   28466 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0330 08:53:13.358803   28466 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0330 08:53:13.358884   28466 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0330 08:53:13.358936   28466 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0330 08:53:13.358943   28466 kubeadm.go:322] 
	I0330 08:53:13.361707   28466 kubeadm.go:322] W0330 15:51:17.207080    3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0330 08:53:13.361861   28466 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 08:53:13.361936   28466 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 08:53:13.362056   28466 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0330 08:53:13.362151   28466 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 08:53:13.362255   28466 kubeadm.go:322] W0330 15:51:18.340722    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0330 08:53:13.362359   28466 kubeadm.go:322] W0330 15:51:18.341529    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0330 08:53:13.362433   28466 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 08:53:13.362491   28466 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0330 08:53:13.362535   28466 kubeadm.go:403] StartCluster complete in 3m54.820130526s
	I0330 08:53:13.362641   28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 08:53:13.381700   28466 logs.go:277] 0 containers: []
	W0330 08:53:13.381713   28466 logs.go:279] No container was found matching "kube-apiserver"
	I0330 08:53:13.381790   28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 08:53:13.401224   28466 logs.go:277] 0 containers: []
	W0330 08:53:13.401236   28466 logs.go:279] No container was found matching "etcd"
	I0330 08:53:13.401303   28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 08:53:13.421383   28466 logs.go:277] 0 containers: []
	W0330 08:53:13.421397   28466 logs.go:279] No container was found matching "coredns"
	I0330 08:53:13.421467   28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 08:53:13.441098   28466 logs.go:277] 0 containers: []
	W0330 08:53:13.441111   28466 logs.go:279] No container was found matching "kube-scheduler"
	I0330 08:53:13.441189   28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 08:53:13.460470   28466 logs.go:277] 0 containers: []
	W0330 08:53:13.460484   28466 logs.go:279] No container was found matching "kube-proxy"
	I0330 08:53:13.460550   28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 08:53:13.479919   28466 logs.go:277] 0 containers: []
	W0330 08:53:13.479932   28466 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 08:53:13.480000   28466 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 08:53:13.499065   28466 logs.go:277] 0 containers: []
	W0330 08:53:13.499080   28466 logs.go:279] No container was found matching "kindnet"
	I0330 08:53:13.499087   28466 logs.go:123] Gathering logs for kubelet ...
	I0330 08:53:13.499102   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 08:53:13.537005   28466 logs.go:123] Gathering logs for dmesg ...
	I0330 08:53:13.537018   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 08:53:13.550268   28466 logs.go:123] Gathering logs for describe nodes ...
	I0330 08:53:13.550281   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 08:53:13.605935   28466 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 08:53:13.605948   28466 logs.go:123] Gathering logs for Docker ...
	I0330 08:53:13.605959   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 08:53:13.630203   28466 logs.go:123] Gathering logs for container status ...
	I0330 08:53:13.630221   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 08:53:15.681227   28466 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050934887s)
	W0330 08:53:15.681353   28466 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0330 15:51:17.207080    3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0330 15:51:18.340722    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0330 15:51:18.341529    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0330 08:53:15.681371   28466 out.go:239] * 
	W0330 08:53:15.681499   28466 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0330 15:51:17.207080    3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0330 15:51:18.340722    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0330 15:51:18.341529    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 08:53:15.681515   28466 out.go:239] * 
	W0330 08:53:15.682163   28466 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0330 08:53:15.745521   28466 out.go:177] 
	W0330 08:53:15.788074   28466 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0330 15:51:17.207080    3586 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0330 15:51:18.340722    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0330 15:51:18.341529    3586 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 08:53:15.788237   28466 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0330 08:53:15.788317   28466 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0330 08:53:15.809691   28466 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-106000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (276.18s)
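
The kubelet never became healthy in this run, and minikube's own suggestion above is to retry with the systemd cgroup driver. A minimal sketch of how one might follow that up locally, reusing the profile name and flags from this run (the journalctl step simply runs the command kubeadm recommends, but from the host via minikube ssh):

    # Inspect the kubelet inside the node container that failed to start
    minikube ssh -p ingress-addon-legacy-106000 -- sudo journalctl -xeu kubelet | tail -n 100

    # Retry the start with the cgroup-driver override suggested in the log above
    minikube delete -p ingress-addon-legacy-106000
    minikube start -p ingress-addon-legacy-106000 --kubernetes-version=v1.18.20 \
      --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd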

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (91.81s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-106000 addons enable ingress --alsologtostderr -v=5
E0330 08:54:01.866308   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-106000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m31.349051499s)
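
Since every apply attempt below is refused on localhost:8443, a quick reachability check against this profile's control plane is the natural first step before re-running the addon. A small sketch using standard minikube/kubectl commands (the kubeconfig context name normally matches the profile name, which is why the final error in this log complains that it does not exist):

    # Does minikube think the cluster components for this profile are up?
    minikube status -p ingress-addon-legacy-106000

    # Is there a kubeconfig context for the profile, and does the apiserver answer?
    kubectl --context ingress-addon-legacy-106000 get nodes -o wide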

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0330 08:53:15.974252   28807 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:53:15.974443   28807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:53:15.974450   28807 out.go:309] Setting ErrFile to fd 2...
	I0330 08:53:15.974456   28807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:53:15.974575   28807 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 08:53:15.996540   28807 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0330 08:53:16.018444   28807 config.go:182] Loaded profile config "ingress-addon-legacy-106000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0330 08:53:16.018463   28807 addons.go:66] Setting ingress=true in profile "ingress-addon-legacy-106000"
	I0330 08:53:16.018472   28807 addons.go:228] Setting addon ingress=true in "ingress-addon-legacy-106000"
	I0330 08:53:16.018517   28807 host.go:66] Checking if "ingress-addon-legacy-106000" exists ...
	I0330 08:53:16.019026   28807 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
	I0330 08:53:16.100002   28807 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0330 08:53:16.121169   28807 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0330 08:53:16.142102   28807 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0330 08:53:16.163395   28807 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0330 08:53:16.184410   28807 addons.go:420] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0330 08:53:16.184434   28807 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0330 08:53:16.184537   28807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:53:16.246075   28807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
	I0330 08:53:16.340337   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:16.393196   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:16.393244   28807 retry.go:31] will retry after 311.716689ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:16.705447   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:16.758241   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:16.758264   28807 retry.go:31] will retry after 271.61714ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:17.031117   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:17.086040   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:17.086062   28807 retry.go:31] will retry after 366.92871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:17.453453   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:17.508327   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:17.508351   28807 retry.go:31] will retry after 1.066596448s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:18.576093   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:18.631783   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:18.631801   28807 retry.go:31] will retry after 1.076423136s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:19.708431   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:19.761877   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:19.761895   28807 retry.go:31] will retry after 1.659510619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:21.422503   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:21.477563   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:21.477580   28807 retry.go:31] will retry after 1.918764578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:23.396684   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:23.451218   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:23.451235   28807 retry.go:31] will retry after 6.019282023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:29.470996   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:29.525122   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:29.525139   28807 retry.go:31] will retry after 7.199791942s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:36.725718   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:36.781532   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:36.781550   28807 retry.go:31] will retry after 11.927256084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:48.711485   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:48.766128   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:48.766146   28807 retry.go:31] will retry after 9.347428848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:58.115106   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:53:58.169510   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:53:58.169532   28807 retry.go:31] will retry after 24.373372283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:22.544364   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:54:22.599711   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:22.599728   28807 retry.go:31] will retry after 24.50767731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:47.108963   28807 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0330 08:54:47.164846   28807 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:47.164877   28807 addons.go:464] Verifying addon ingress=true in "ingress-addon-legacy-106000"
	I0330 08:54:47.186666   28807 out.go:177] * Verifying ingress addon...
	I0330 08:54:47.209630   28807 out.go:177] 
	W0330 08:54:47.231669   28807 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-106000" does not exist: client config: context "ingress-addon-legacy-106000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-106000" does not exist: client config: context "ingress-addon-legacy-106000" does not exist]
	W0330 08:54:47.231701   28807 out.go:239] * 
	* 
	W0330 08:54:47.237801   28807 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0330 08:54:47.259393   28807 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
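Every apply attempt in the stderr above fails with "The connection to the server localhost:8443 was refused", i.e. the apiserver inside the kic container never answered on its secure port. Below is a minimal sketch (not minikube's own code) of one way to pre-check that endpoint before re-running kubectl apply; the host-side address 127.0.0.1:56052 is an assumption read off the docker inspect output further down, where 8443/tcp is published on that host port.

// Sketch only: probe the published apiserver port to separate
// "apiserver never came up" from manifest/apply errors.
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false // e.g. "connection refused", as seen in the log above
	}
	conn.Close()
	return true
}

func main() {
	// 127.0.0.1:56052 is an assumption taken from the docker inspect output below.
	if !apiserverReachable("127.0.0.1:56052", 2*time.Second) {
		fmt.Println("apiserver not reachable; kubectl apply would fail with 'connection refused'")
	}
}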
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-106000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-106000:

-- stdout --
	[
	    {
	        "Id": "2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313",
	        "Created": "2023-03-30T15:49:12.677721073Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T15:49:12.970353998Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/hosts",
	        "LogPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313-json.log",
	        "Name": "/ingress-addon-legacy-106000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-106000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-106000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-106000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-106000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-106000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-106000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-106000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c00cf41442a419728a5152b77292cf773e5fae33b5394a47c989dc9e4397dc57",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56055"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56052"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c00cf41442a4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-106000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2a6214b3d75e",
	                        "ingress-addon-legacy-106000"
	                    ],
	                    "NetworkID": "011c97de28b919532250d90a05061b13150128fbac9bbec992ca90ccec248c29",
	                    "EndpointID": "d382d09528fed748f1d266f4bdd77c9254ca0344d24270164adc89765afd537f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-106000 -n ingress-addon-legacy-106000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-106000 -n ingress-addon-legacy-106000: exit status 6 (393.874127ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0330 08:54:47.730911   28896 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-106000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-106000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (91.81s)
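The "retry.go:31] will retry after …" lines above show the enable path re-running the same apply command with a growing delay until it finally exits with MK_ADDON_ENABLE. A minimal sketch of that retry-with-growing-delay pattern follows; it is purely illustrative, not minikube's actual retry package, and the delays and failing apply callback are assumptions.

// Sketch only: retry a callback with increasing delays, mirroring the
// "apply failed, will retry after ..." behaviour visible in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryApply(apply func() error, delays []time.Duration) error {
	var err error
	for _, d := range delays {
		if err = apply(); err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err // last error, e.g. "connection refused" after every attempt
}

func main() {
	delays := []time.Duration{6 * time.Second, 7 * time.Second, 12 * time.Second}
	err := retryApply(func() error {
		return errors.New("connection to the server localhost:8443 was refused")
	}, delays)
	fmt.Println("giving up:", err)
}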

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (119.31s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-106000 addons enable ingress-dns --alsologtostderr -v=5
E0330 08:56:12.333857   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:56:18.018088   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 08:56:45.712395   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-106000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m58.847120339s)

-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0330 08:54:47.783546   28906 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:54:47.783716   28906 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:54:47.783721   28906 out.go:309] Setting ErrFile to fd 2...
	I0330 08:54:47.783725   28906 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:54:47.783842   28906 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 08:54:47.806363   28906 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0330 08:54:47.828554   28906 config.go:182] Loaded profile config "ingress-addon-legacy-106000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0330 08:54:47.828584   28906 addons.go:66] Setting ingress-dns=true in profile "ingress-addon-legacy-106000"
	I0330 08:54:47.828600   28906 addons.go:228] Setting addon ingress-dns=true in "ingress-addon-legacy-106000"
	I0330 08:54:47.828672   28906 host.go:66] Checking if "ingress-addon-legacy-106000" exists ...
	I0330 08:54:47.829660   28906 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-106000 --format={{.State.Status}}
	I0330 08:54:47.912598   28906 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0330 08:54:47.934613   28906 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0330 08:54:47.956596   28906 addons.go:420] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0330 08:54:47.956634   28906 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0330 08:54:47.956795   28906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-106000
	I0330 08:54:48.018121   28906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56053 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/ingress-addon-legacy-106000/id_rsa Username:docker}
	I0330 08:54:48.112296   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:48.166608   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:48.166652   28906 retry.go:31] will retry after 135.418611ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:48.304339   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:48.358706   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:48.358725   28906 retry.go:31] will retry after 284.220652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:48.645255   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:48.699974   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:48.699999   28906 retry.go:31] will retry after 347.105727ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:49.047866   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:49.102492   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:49.102509   28906 retry.go:31] will retry after 780.964362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:49.885237   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:49.943685   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:49.943706   28906 retry.go:31] will retry after 1.504335293s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:51.449165   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:51.506942   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:51.506967   28906 retry.go:31] will retry after 2.297377085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:53.806643   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:53.861135   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:53.861153   28906 retry.go:31] will retry after 3.472067274s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:57.334897   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:54:57.389162   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:54:57.389180   28906 retry.go:31] will retry after 3.795142908s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:01.186581   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:55:01.242928   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:01.242948   28906 retry.go:31] will retry after 5.537581962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:06.782929   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:55:06.839663   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:06.839685   28906 retry.go:31] will retry after 5.884779907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:12.724836   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:55:12.779944   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:12.779963   28906 retry.go:31] will retry after 13.271712104s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:26.052436   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:55:26.108538   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:26.108555   28906 retry.go:31] will retry after 10.851235771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:36.960692   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:55:37.013522   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:55:37.013542   28906 retry.go:31] will retry after 25.124831731s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:56:02.141376   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:56:02.196862   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:56:02.196878   28906 retry.go:31] will retry after 44.242339249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:56:46.442469   28906 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0330 08:56:46.497732   28906 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0330 08:56:46.519688   28906 out.go:177] 
	W0330 08:56:46.541668   28906 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0330 08:56:46.541701   28906 out.go:239] * 
	* 
	W0330 08:56:46.546866   28906 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0330 08:56:46.568562   28906 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
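The post-mortem below shells out to docker inspect and only needs a handful of fields from the JSON it returns (the container reports "running" even though the apiserver inside it never served). A minimal sketch, not helpers_test.go itself, of decoding just those fields:

// Sketch only: run `docker inspect` on the profile container and decode
// the .Name and .State fields used in the post-mortem below.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "ingress-addon-legacy-106000").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, e := range entries {
		fmt.Printf("%s: status=%s running=%v\n", e.Name, e.State.Status, e.State.Running)
	}
}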
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-106000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-106000:

-- stdout --
	[
	    {
	        "Id": "2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313",
	        "Created": "2023-03-30T15:49:12.677721073Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T15:49:12.970353998Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/hosts",
	        "LogPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313-json.log",
	        "Name": "/ingress-addon-legacy-106000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-106000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-106000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-106000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-106000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-106000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-106000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-106000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c00cf41442a419728a5152b77292cf773e5fae33b5394a47c989dc9e4397dc57",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56055"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56052"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c00cf41442a4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-106000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2a6214b3d75e",
	                        "ingress-addon-legacy-106000"
	                    ],
	                    "NetworkID": "011c97de28b919532250d90a05061b13150128fbac9bbec992ca90ccec248c29",
	                    "EndpointID": "d382d09528fed748f1d266f4bdd77c9254ca0344d24270164adc89765afd537f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-106000 -n ingress-addon-legacy-106000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-106000 -n ingress-addon-legacy-106000: exit status 6 (400.998727ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 08:56:47.046225   29024 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-106000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-106000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (119.31s)
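Note: the exit status 6 above comes from the kubeconfig check, not from the container itself: "ingress-addon-legacy-106000" is missing from /Users/jenkins/minikube-integration/16199-24978/kubeconfig while docker inspect still reports the node as running, and the status output already names the fix. A minimal remediation sketch based only on that warning (profile name and binary path are taken from this run and are not part of the recorded test output):
	# repoint kubectl at the running profile, as the warning suggests
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-106000
	# confirm the host, kubelet and apiserver states afterwards
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-106000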

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:176: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-106000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-106000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313",
	        "Created": "2023-03-30T15:49:12.677721073Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 436108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T15:49:12.970353998Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/hosts",
	        "LogPath": "/var/lib/docker/containers/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313/2a6214b3d75e4e89902f9e9846abadf2b19342b643c95dbeee88d02427923313-json.log",
	        "Name": "/ingress-addon-legacy-106000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-106000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-106000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/baa54d0526a53b40d95b42ba1702ab93f329ca09094b49bb3b9896af06acf90e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-106000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-106000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-106000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-106000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-106000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c00cf41442a419728a5152b77292cf773e5fae33b5394a47c989dc9e4397dc57",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56055"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56052"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c00cf41442a4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-106000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2a6214b3d75e",
	                        "ingress-addon-legacy-106000"
	                    ],
	                    "NetworkID": "011c97de28b919532250d90a05061b13150128fbac9bbec992ca90ccec248c29",
	                    "EndpointID": "d382d09528fed748f1d266f4bdd77c9254ca0344d24270164adc89765afd537f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-106000 -n ingress-addon-legacy-106000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-106000 -n ingress-addon-legacy-106000: exit status 6 (396.50832ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 08:56:47.504259   29038 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-106000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-106000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

                                                
                                    
TestSkaffold (43.49s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3025859520 version
skaffold_test.go:63: skaffold version: v2.3.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-124000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-124000 --memory=2600 --driver=docker : (25.50049896s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3025859520 run --minikube-profile skaffold-124000 --kube-context skaffold-124000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3025859520 run --minikube-profile skaffold-124000 --kube-context skaffold-124000 --status-check=true --port-forward=false --interactive=false: exit status 1 (5.79436739s)

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	Starting build...
	Found [skaffold-124000] context, using local docker daemon.
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#1 [internal] load .dockerignore
	#1 sha256:5ba29dd4109f8896947e4b351a93bc24dad8201f63e41cef378ab41692b8bf3e
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 sha256:75a817048d86910e123c5b193de7966df25d685d23c97186217645ad41d668ae
	#2 transferring dockerfile: 350B done
	#2 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10
	#4 sha256:3e6280708dea593be8ec70e0050e1a81cce57ccd8855e8cbe6de9abfeed8cee7
	#4 ...
	
	#3 [internal] load metadata for docker.io/library/alpine:3.10
	#3 sha256:ac8c9d4b8fc421ddf809bac2b79af6ebec0aa591815b5d2abf229ccdfba18d01
	#3 ERROR: error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `keychain cannot be accessed because the current session does not allow user interaction. The keychain may be locked; unlock it by running "security -v unlock-keychain ~/Library/Keychains/login.keychain-db" and try again``
	
	#4 [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10
	#4 sha256:3e6280708dea593be8ec70e0050e1a81cce57ccd8855e8cbe6de9abfeed8cee7
	#4 CANCELED
	------
	 > [internal] load metadata for docker.io/library/alpine:3.10:
	------
	alpine:3.10: error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `keychain cannot be accessed because the current session does not allow user interaction. The keychain may be locked; unlock it by running "security -v unlock-keychain ~/Library/Keychains/login.keychain-db" and try again``
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	Build [leeroy-app] was canceled

                                                
                                                
-- /stdout --
** stderr ** 
	build [leeroy-web] failed: exit status 1. Docker build ran into internal error. Please retry.
	If this keeps happening, please open an issue..

                                                
                                                
** /stderr **
skaffold_test.go:107: error running skaffold: exit status 1

                                                
                                                
-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Not found. Building
	 - leeroy-app: Not found. Building
	Starting build...
	Found [skaffold-124000] context, using local docker daemon.
	Building [leeroy-web]...
	Target platforms: [linux/amd64]
	#1 [internal] load .dockerignore
	#1 sha256:5ba29dd4109f8896947e4b351a93bc24dad8201f63e41cef378ab41692b8bf3e
	#1 transferring context: 2B done
	#1 DONE 0.0s
	
	#2 [internal] load build definition from Dockerfile
	#2 sha256:75a817048d86910e123c5b193de7966df25d685d23c97186217645ad41d668ae
	#2 transferring dockerfile: 350B done
	#2 DONE 0.0s
	
	#4 [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10
	#4 sha256:3e6280708dea593be8ec70e0050e1a81cce57ccd8855e8cbe6de9abfeed8cee7
	#4 ...
	
	#3 [internal] load metadata for docker.io/library/alpine:3.10
	#3 sha256:ac8c9d4b8fc421ddf809bac2b79af6ebec0aa591815b5d2abf229ccdfba18d01
	#3 ERROR: error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `keychain cannot be accessed because the current session does not allow user interaction. The keychain may be locked; unlock it by running "security -v unlock-keychain ~/Library/Keychains/login.keychain-db" and try again``
	
	#4 [internal] load metadata for docker.io/library/golang:1.12.9-alpine3.10
	#4 sha256:3e6280708dea593be8ec70e0050e1a81cce57ccd8855e8cbe6de9abfeed8cee7
	#4 CANCELED
	------
	 > [internal] load metadata for docker.io/library/alpine:3.10:
	------
	alpine:3.10: error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `keychain cannot be accessed because the current session does not allow user interaction. The keychain may be locked; unlock it by running "security -v unlock-keychain ~/Library/Keychains/login.keychain-db" and try again``
	Building [leeroy-app]...
	Target platforms: [linux/amd64]
	Build [leeroy-app] was canceled

                                                
                                                
-- /stdout --
** stderr ** 
	build [leeroy-web] failed: exit status 1. Docker build ran into internal error. Please retry.
	If this keeps happening, please open an issue..

                                                
                                                
** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-03-30 09:12:38.775672 -0700 PDT m=+2116.041627682
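Note: the build never reaches the cluster; BuildKit fails while resolving docker.io/library/alpine:3.10 because the Docker credential helper cannot read the locked macOS login keychain on the Jenkins agent, and the error text names the workaround itself. A minimal sketch of that workaround, using only commands quoted in the output above (unlocking prompts for, or can be passed, the login password):
	# unlock the login keychain so the Docker credential helper can be queried
	security -v unlock-keychain ~/Library/Keychains/login.keychain-db
	# retry the failed skaffold run with the same flags as above
	/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe3025859520 run --minikube-profile skaffold-124000 --kube-context skaffold-124000 --status-check=true --port-forward=false --interactive=false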
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect skaffold-124000
helpers_test.go:235: (dbg) docker inspect skaffold-124000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ec746aca711e55c236f299475b7972867cafa0b2218948ba3af594496ed893fe",
	        "Created": "2023-03-30T16:12:15.719451075Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 531662,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:12:16.020218863Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/ec746aca711e55c236f299475b7972867cafa0b2218948ba3af594496ed893fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ec746aca711e55c236f299475b7972867cafa0b2218948ba3af594496ed893fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/ec746aca711e55c236f299475b7972867cafa0b2218948ba3af594496ed893fe/hosts",
	        "LogPath": "/var/lib/docker/containers/ec746aca711e55c236f299475b7972867cafa0b2218948ba3af594496ed893fe/ec746aca711e55c236f299475b7972867cafa0b2218948ba3af594496ed893fe-json.log",
	        "Name": "/skaffold-124000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-124000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-124000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f4b611b288ba3569f2c54a6a627725e7e77584ebac401543ecb3ba26fdb78e30-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4b611b288ba3569f2c54a6a627725e7e77584ebac401543ecb3ba26fdb78e30/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4b611b288ba3569f2c54a6a627725e7e77584ebac401543ecb3ba26fdb78e30/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4b611b288ba3569f2c54a6a627725e7e77584ebac401543ecb3ba26fdb78e30/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-124000",
	                "Source": "/var/lib/docker/volumes/skaffold-124000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-124000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-124000",
	                "name.minikube.sigs.k8s.io": "skaffold-124000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f004f19dbf0f6632ed40b98eef3a77e61f71bcd72597f47e45549d36fc68537f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56945"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56946"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56948"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56949"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f004f19dbf0f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-124000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "ec746aca711e",
	                        "skaffold-124000"
	                    ],
	                    "NetworkID": "f6356fc308b061a49e2eb2266b1635b09e6f14c315bd388ade0537dd24cd84cc",
	                    "EndpointID": "e145829fde5e44e678615171b8c42ce1152d57a860f837b8f1577897d089787e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-124000 -n skaffold-124000
helpers_test.go:244: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p skaffold-124000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p skaffold-124000 logs -n 25: (1.797588819s)
helpers_test.go:252: TestSkaffold logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	|  Command   |              Args              |        Profile        |   User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	| start      | -p multinode-950000-m02        | multinode-950000-m02  | jenkins  | v1.29.0 | 30 Mar 23 09:07 PDT |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| start      | -p multinode-950000-m03        | multinode-950000-m03  | jenkins  | v1.29.0 | 30 Mar 23 09:07 PDT | 30 Mar 23 09:07 PDT |
	|            | --driver=docker                |                       |          |         |                     |                     |
	| node       | add -p multinode-950000        | multinode-950000      | jenkins  | v1.29.0 | 30 Mar 23 09:07 PDT |                     |
	| delete     | -p multinode-950000-m03        | multinode-950000-m03  | jenkins  | v1.29.0 | 30 Mar 23 09:07 PDT | 30 Mar 23 09:07 PDT |
	| delete     | -p multinode-950000            | multinode-950000      | jenkins  | v1.29.0 | 30 Mar 23 09:07 PDT | 30 Mar 23 09:07 PDT |
	| start      | -p test-preload-521000         | test-preload-521000   | jenkins  | v1.29.0 | 30 Mar 23 09:07 PDT | 30 Mar 23 09:09 PDT |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr              |                       |          |         |                     |                     |
	|            | --wait=true --preload=false    |                       |          |         |                     |                     |
	|            | --driver=docker                |                       |          |         |                     |                     |
	|            | --kubernetes-version=v1.24.4   |                       |          |         |                     |                     |
	| ssh        | -p test-preload-521000         | test-preload-521000   | jenkins  | v1.29.0 | 30 Mar 23 09:09 PDT | 30 Mar 23 09:09 PDT |
	|            | -- docker pull                 |                       |          |         |                     |                     |
	|            | gcr.io/k8s-minikube/busybox    |                       |          |         |                     |                     |
	| stop       | -p test-preload-521000         | test-preload-521000   | jenkins  | v1.29.0 | 30 Mar 23 09:09 PDT | 30 Mar 23 09:09 PDT |
	| start      | -p test-preload-521000         | test-preload-521000   | jenkins  | v1.29.0 | 30 Mar 23 09:09 PDT | 30 Mar 23 09:10 PDT |
	|            | --memory=2200                  |                       |          |         |                     |                     |
	|            | --alsologtostderr -v=1         |                       |          |         |                     |                     |
	|            | --wait=true --driver=docker    |                       |          |         |                     |                     |
	| ssh        | -p test-preload-521000 --      | test-preload-521000   | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT | 30 Mar 23 09:10 PDT |
	|            | docker images                  |                       |          |         |                     |                     |
	| delete     | -p test-preload-521000         | test-preload-521000   | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT | 30 Mar 23 09:10 PDT |
	| start      | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT | 30 Mar 23 09:10 PDT |
	|            | --memory=2048 --driver=docker  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT |                     |
	|            | --schedule 5m                  |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:10 PDT | 30 Mar 23 09:10 PDT |
	|            | --cancel-scheduled             |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:11 PDT |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:11 PDT |                     |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| stop       | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:11 PDT | 30 Mar 23 09:11 PDT |
	|            | --schedule 15s                 |                       |          |         |                     |                     |
	| delete     | -p scheduled-stop-213000       | scheduled-stop-213000 | jenkins  | v1.29.0 | 30 Mar 23 09:11 PDT | 30 Mar 23 09:12 PDT |
	| start      | -p skaffold-124000             | skaffold-124000       | jenkins  | v1.29.0 | 30 Mar 23 09:12 PDT | 30 Mar 23 09:12 PDT |
	|            | --memory=2600 --driver=docker  |                       |          |         |                     |                     |
	| docker-env | --shell none -p                | skaffold-124000       | skaffold | v1.29.0 | 30 Mar 23 09:12 PDT | 30 Mar 23 09:12 PDT |
	|            | skaffold-124000                |                       |          |         |                     |                     |
	|            | --user=skaffold                |                       |          |         |                     |                     |
	|------------|--------------------------------|-----------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 09:12:07
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 09:12:07.475654   33884 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:12:07.475839   33884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:12:07.475841   33884 out.go:309] Setting ErrFile to fd 2...
	I0330 09:12:07.475844   33884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:12:07.475949   33884 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:12:07.477414   33884 out.go:303] Setting JSON to false
	I0330 09:12:07.497726   33884 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7895,"bootTime":1680184832,"procs":426,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:12:07.497810   33884 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:12:07.527835   33884 out.go:177] * [skaffold-124000] minikube v1.29.0 on Darwin 13.3
	I0330 09:12:07.570710   33884 notify.go:220] Checking for updates...
	I0330 09:12:07.592537   33884 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:12:07.613638   33884 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:12:07.634698   33884 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:12:07.655612   33884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:12:07.676626   33884 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:12:07.697405   33884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:12:07.718953   33884 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:12:07.785927   33884 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:12:07.786065   33884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:12:07.974575   33884 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 16:12:07.839615718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:12:07.996436   33884 out.go:177] * Using the docker driver based on user configuration
	I0330 09:12:08.018163   33884 start.go:295] selected driver: docker
	I0330 09:12:08.018175   33884 start.go:859] validating driver "docker" against <nil>
	I0330 09:12:08.018191   33884 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:12:08.022345   33884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:12:08.212816   33884 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 16:12:08.074903178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:12:08.212932   33884 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0330 09:12:08.213102   33884 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0330 09:12:08.234806   33884 out.go:177] * Using Docker Desktop driver with root privileges
	I0330 09:12:08.256576   33884 cni.go:84] Creating CNI manager for ""
	I0330 09:12:08.256602   33884 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:12:08.256618   33884 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0330 09:12:08.256629   33884 start_flags.go:319] config:
	{Name:skaffold-124000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:skaffold-124000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
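The bridge CNI picked automatically above (docker driver + docker container runtime on Kubernetes v1.24+) can also be requested explicitly. A minimal, hypothetical equivalent of this start, assuming the flag values shown in the generated config (profile name, memory, CPUs):

    #!/usr/bin/env bash
    # Hedged sketch only: explicit flags mirroring the auto-generated config above.
    set -euo pipefail
    minikube start -p skaffold-124000 \
      --driver=docker \
      --container-runtime=docker \
      --cni=bridge \
      --memory=2600mb \
      --cpus=2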
	I0330 09:12:08.299618   33884 out.go:177] * Starting control plane node skaffold-124000 in cluster skaffold-124000
	I0330 09:12:08.321550   33884 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:12:08.343403   33884 out.go:177] * Pulling base image ...
	I0330 09:12:08.385680   33884 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 09:12:08.385731   33884 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:12:08.385790   33884 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0330 09:12:08.385803   33884 cache.go:57] Caching tarball of preloaded images
	I0330 09:12:08.386014   33884 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:12:08.386030   33884 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0330 09:12:08.387513   33884 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/config.json ...
	I0330 09:12:08.387816   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/config.json: {Name:mkfb4ea8e9ef3321b28aff7234a149182f0a3a60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:08.444663   33884 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:12:08.444676   33884 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:12:08.444693   33884 cache.go:193] Successfully downloaded all kic artifacts
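The pull is skipped above because the digest-pinned kic base image is already in the local daemon. A quick way to confirm that by hand, using the digest copied from the log (tag dropped so the reference resolves unambiguously):

    #!/usr/bin/env bash
    # Sketch: ask the local Docker daemon for the kic base image by digest.
    set -euo pipefail
    IMG='gcr.io/k8s-minikube/kicbase-builds@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978'
    docker image inspect --format '{{.Id}}  created {{.Created}}' "$IMG"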
	I0330 09:12:08.444749   33884 start.go:364] acquiring machines lock for skaffold-124000: {Name:mkd615d13a873d1dbb0ce50f7e00e1621001d4c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:12:08.444918   33884 start.go:368] acquired machines lock for "skaffold-124000" in 159.07µs
	I0330 09:12:08.444949   33884 start.go:93] Provisioning new machine with config: &{Name:skaffold-124000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:skaffold-124000 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnet
Path: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:12:08.445021   33884 start.go:125] createHost starting for "" (driver="docker")
	I0330 09:12:08.488590   33884 out.go:204] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0330 09:12:08.488934   33884 start.go:159] libmachine.API.Create for "skaffold-124000" (driver="docker")
	I0330 09:12:08.488985   33884 client.go:168] LocalClient.Create starting
	I0330 09:12:08.489154   33884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem
	I0330 09:12:08.489231   33884 main.go:141] libmachine: Decoding PEM data...
	I0330 09:12:08.489256   33884 main.go:141] libmachine: Parsing certificate...
	I0330 09:12:08.489382   33884 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem
	I0330 09:12:08.489429   33884 main.go:141] libmachine: Decoding PEM data...
	I0330 09:12:08.489441   33884 main.go:141] libmachine: Parsing certificate...
	I0330 09:12:08.490276   33884 cli_runner.go:164] Run: docker network inspect skaffold-124000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0330 09:12:08.547567   33884 cli_runner.go:211] docker network inspect skaffold-124000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0330 09:12:08.547659   33884 network_create.go:281] running [docker network inspect skaffold-124000] to gather additional debugging logs...
	I0330 09:12:08.547672   33884 cli_runner.go:164] Run: docker network inspect skaffold-124000
	W0330 09:12:08.604796   33884 cli_runner.go:211] docker network inspect skaffold-124000 returned with exit code 1
	I0330 09:12:08.604817   33884 network_create.go:284] error running [docker network inspect skaffold-124000]: docker network inspect skaffold-124000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: skaffold-124000
	I0330 09:12:08.604830   33884 network_create.go:286] output of [docker network inspect skaffold-124000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: skaffold-124000
	
	** /stderr **
	I0330 09:12:08.604923   33884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0330 09:12:08.710823   33884 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0330 09:12:08.711168   33884 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005a8c80}
	I0330 09:12:08.711179   33884 network_create.go:123] attempt to create docker network skaffold-124000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0330 09:12:08.711249   33884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-124000 skaffold-124000
	W0330 09:12:08.769652   33884 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-124000 skaffold-124000 returned with exit code 1
	W0330 09:12:08.769689   33884 network_create.go:148] failed to create docker network skaffold-124000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-124000 skaffold-124000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0330 09:12:08.769708   33884 network_create.go:115] failed to create docker network skaffold-124000 192.168.58.0/24, will retry: subnet is taken
	I0330 09:12:08.771199   33884 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0330 09:12:08.771494   33884 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005a9ad0}
	I0330 09:12:08.771502   33884 network_create.go:123] attempt to create docker network skaffold-124000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0330 09:12:08.771562   33884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=skaffold-124000 skaffold-124000
	I0330 09:12:08.861678   33884 network_create.go:107] docker network skaffold-124000 192.168.67.0/24 created
	I0330 09:12:08.861708   33884 kic.go:117] calculated static IP "192.168.67.2" for the "skaffold-124000" container
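The retry above ("Pool overlaps with other one on this address space") happens because 192.168.58.0/24 is already claimed by another Docker network; 192.168.67.0/24 was the next free private subnet. A small sketch to list every network's IPAM subnet and see the conflict directly (docker CLI only, names as reported by the daemon):

    #!/usr/bin/env bash
    # Sketch: print each Docker network with its subnet(s).
    set -euo pipefail
    docker network ls --format '{{.Name}}' |
      xargs -I{} docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' {}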
	I0330 09:12:08.861824   33884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0330 09:12:08.920360   33884 cli_runner.go:164] Run: docker volume create skaffold-124000 --label name.minikube.sigs.k8s.io=skaffold-124000 --label created_by.minikube.sigs.k8s.io=true
	I0330 09:12:08.978558   33884 oci.go:103] Successfully created a docker volume skaffold-124000
	I0330 09:12:08.978693   33884 cli_runner.go:164] Run: docker run --rm --name skaffold-124000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-124000 --entrypoint /usr/bin/test -v skaffold-124000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0330 09:12:09.435437   33884 oci.go:107] Successfully prepared a docker volume skaffold-124000
	I0330 09:12:09.435467   33884 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 09:12:09.435479   33884 kic.go:190] Starting extracting preloaded images to volume ...
	I0330 09:12:09.435603   33884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-124000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0330 09:12:15.467088   33884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-124000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (6.031366621s)
	I0330 09:12:15.467106   33884 kic.go:199] duration metric: took 6.031582 seconds to extract preloaded images to volume
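The ~6 s step above unpacks the preloaded image tarball into the skaffold-124000 volume before the node container starts. A sketch to peek at the result, reusing the kicbase image (by digest, from the log) as a throwaway shell; the exact directory layout inside /var is an assumption:

    #!/usr/bin/env bash
    # Sketch: show how much data the preload extraction left in the volume.
    set -euo pipefail
    IMG='gcr.io/k8s-minikube/kicbase-builds@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978'
    docker run --rm --entrypoint /bin/sh -v skaffold-124000:/var "$IMG" -c 'du -sh /var/lib/docker'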
	I0330 09:12:15.467233   33884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0330 09:12:15.654570   33884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-124000 --name skaffold-124000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-124000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-124000 --network skaffold-124000 --ip 192.168.67.2 --volume skaffold-124000:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0330 09:12:16.028544   33884 cli_runner.go:164] Run: docker container inspect skaffold-124000 --format={{.State.Running}}
	I0330 09:12:16.093106   33884 cli_runner.go:164] Run: docker container inspect skaffold-124000 --format={{.State.Status}}
	I0330 09:12:16.162843   33884 cli_runner.go:164] Run: docker exec skaffold-124000 stat /var/lib/dpkg/alternatives/iptables
	I0330 09:12:16.278396   33884 oci.go:144] the created container "skaffold-124000" has a running status.
	I0330 09:12:16.278421   33884 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa...
	I0330 09:12:16.491016   33884 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0330 09:12:16.600363   33884 cli_runner.go:164] Run: docker container inspect skaffold-124000 --format={{.State.Status}}
	I0330 09:12:16.661474   33884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0330 09:12:16.661490   33884 kic_runner.go:114] Args: [docker exec --privileged skaffold-124000 chown docker:docker /home/docker/.ssh/authorized_keys]
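With the public key installed in /home/docker/.ssh/authorized_keys, the node is reachable over SSH the same way the provisioner reaches it. A sketch, assuming the key path shown above plus the forwarded host port 56945 and user docker that appear a few lines further down:

    #!/usr/bin/env bash
    # Sketch: SSH into the kic node container through the forwarded port.
    set -euo pipefail
    KEY='/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa'
    ssh -i "$KEY" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -p 56945 docker@127.0.0.1 hostname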
	I0330 09:12:16.774530   33884 cli_runner.go:164] Run: docker container inspect skaffold-124000 --format={{.State.Status}}
	I0330 09:12:16.839684   33884 machine.go:88] provisioning docker machine ...
	I0330 09:12:16.839723   33884 ubuntu.go:169] provisioning hostname "skaffold-124000"
	I0330 09:12:16.839821   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:16.900437   33884 main.go:141] libmachine: Using SSH client type: native
	I0330 09:12:16.900837   33884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56945 <nil> <nil>}
	I0330 09:12:16.900851   33884 main.go:141] libmachine: About to run SSH command:
	sudo hostname skaffold-124000 && echo "skaffold-124000" | sudo tee /etc/hostname
	I0330 09:12:17.028867   33884 main.go:141] libmachine: SSH cmd err, output: <nil>: skaffold-124000
	
	I0330 09:12:17.028945   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:17.088746   33884 main.go:141] libmachine: Using SSH client type: native
	I0330 09:12:17.089087   33884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56945 <nil> <nil>}
	I0330 09:12:17.089098   33884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-124000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-124000/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-124000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:12:17.208969   33884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:12:17.208985   33884 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:12:17.209018   33884 ubuntu.go:177] setting up certificates
	I0330 09:12:17.209026   33884 provision.go:83] configureAuth start
	I0330 09:12:17.209109   33884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-124000
	I0330 09:12:17.267849   33884 provision.go:138] copyHostCerts
	I0330 09:12:17.267962   33884 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:12:17.267971   33884 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:12:17.268093   33884 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:12:17.268282   33884 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:12:17.268285   33884 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:12:17.268344   33884 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:12:17.268487   33884 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:12:17.268490   33884 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:12:17.268554   33884 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:12:17.268688   33884 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.skaffold-124000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-124000]
	I0330 09:12:17.338283   33884 provision.go:172] copyRemoteCerts
	I0330 09:12:17.338352   33884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:12:17.338410   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:17.398488   33884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56945 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa Username:docker}
	I0330 09:12:17.485135   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:12:17.533203   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0330 09:12:17.550747   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0330 09:12:17.567897   33884 provision.go:86] duration metric: configureAuth took 358.857345ms
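configureAuth above generates a server certificate whose SANs include 192.168.67.2, localhost and the node name. A sketch to verify the SANs on the produced file (ServerCertPath from the auth options a few lines earlier):

    #!/usr/bin/env bash
    # Sketch: print the Subject Alternative Names of the generated server cert.
    set -euo pipefail
    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem |
      grep -A1 'Subject Alternative Name'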
	I0330 09:12:17.567905   33884 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:12:17.568043   33884 config.go:182] Loaded profile config "skaffold-124000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:12:17.568099   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:17.628842   33884 main.go:141] libmachine: Using SSH client type: native
	I0330 09:12:17.629182   33884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56945 <nil> <nil>}
	I0330 09:12:17.629194   33884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:12:17.747839   33884 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:12:17.747848   33884 ubuntu.go:71] root file system type: overlay
	I0330 09:12:17.747924   33884 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:12:17.747997   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:17.807733   33884 main.go:141] libmachine: Using SSH client type: native
	I0330 09:12:17.808066   33884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56945 <nil> <nil>}
	I0330 09:12:17.808117   33884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:12:17.935458   33884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:12:17.935562   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:17.996578   33884 main.go:141] libmachine: Using SSH client type: native
	I0330 09:12:17.996932   33884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 56945 <nil> <nil>}
	I0330 09:12:17.996945   33884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:12:18.612455   33884 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:12:17.932565829 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0330 09:12:18.612473   33884 machine.go:91] provisioned docker machine in 1.772760082s
	I0330 09:12:18.612477   33884 client.go:171] LocalClient.Create took 10.123414411s
	I0330 09:12:18.612495   33884 start.go:167] duration metric: libmachine.API.Create for "skaffold-124000" took 10.123489124s
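The diff/mv/daemon-reload step above swaps in the minikube-managed docker.service unit. A sketch to confirm the unit systemd is actually running inside the node matches the one shown in the log (container name from the log):

    #!/usr/bin/env bash
    # Sketch: show the effective ExecStart and the service state inside the node.
    set -euo pipefail
    docker exec skaffold-124000 systemctl cat docker.service | grep '^ExecStart='
    docker exec skaffold-124000 systemctl is-active docker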
	I0330 09:12:18.612501   33884 start.go:300] post-start starting for "skaffold-124000" (driver="docker")
	I0330 09:12:18.612504   33884 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:12:18.612575   33884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:12:18.612632   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:18.673752   33884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56945 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa Username:docker}
	I0330 09:12:18.761435   33884 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:12:18.765048   33884 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:12:18.765069   33884 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:12:18.765075   33884 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:12:18.765078   33884 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:12:18.765085   33884 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:12:18.765172   33884 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:12:18.765334   33884 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:12:18.765524   33884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:12:18.772874   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:12:18.790817   33884 start.go:303] post-start completed in 178.302256ms
	I0330 09:12:18.791322   33884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-124000
	I0330 09:12:18.851369   33884 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/config.json ...
	I0330 09:12:18.851788   33884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:12:18.851843   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:18.913120   33884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56945 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa Username:docker}
	I0330 09:12:18.996870   33884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:12:19.001603   33884 start.go:128] duration metric: createHost completed in 10.556497055s
	I0330 09:12:19.001614   33884 start.go:83] releasing machines lock for "skaffold-124000", held for 10.556612277s
	I0330 09:12:19.001709   33884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-124000
	I0330 09:12:19.060109   33884 ssh_runner.go:195] Run: cat /version.json
	I0330 09:12:19.060127   33884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0330 09:12:19.060170   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:19.060199   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:19.124394   33884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56945 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa Username:docker}
	I0330 09:12:19.124602   33884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56945 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa Username:docker}
	I0330 09:12:19.258457   33884 ssh_runner.go:195] Run: systemctl --version
	I0330 09:12:19.263214   33884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0330 09:12:19.268481   33884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0330 09:12:19.289456   33884 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0330 09:12:19.289506   33884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0330 09:12:19.305050   33884 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0330 09:12:19.305063   33884 start.go:481] detecting cgroup driver to use...
	I0330 09:12:19.305073   33884 detect.go:196] detected "cgroupfs" cgroup driver on host os
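The "cgroupfs" detection above is a host-side probe of Docker Desktop's daemon. The same answer is available directly from the docker CLI, as a quick check:

    #!/usr/bin/env bash
    # Sketch: report the cgroup driver the local Docker daemon uses.
    set -euo pipefail
    docker info --format '{{.CgroupDriver}}'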
	I0330 09:12:19.305145   33884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:12:19.318645   33884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0330 09:12:19.327253   33884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:12:19.335739   33884 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:12:19.335790   33884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:12:19.344300   33884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:12:19.352968   33884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:12:19.361631   33884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:12:19.370323   33884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:12:19.378440   33884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:12:19.386838   33884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:12:19.394064   33884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:12:19.401232   33884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:12:19.465886   33884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 09:12:19.541728   33884 start.go:481] detecting cgroup driver to use...
	I0330 09:12:19.541741   33884 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:12:19.541807   33884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:12:19.552451   33884 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:12:19.552513   33884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:12:19.562911   33884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:12:19.577113   33884 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:12:19.581517   33884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:12:19.589514   33884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0330 09:12:19.603797   33884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:12:19.694459   33884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:12:19.789408   33884 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:12:19.789423   33884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:12:19.803670   33884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:12:19.889519   33884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:12:20.116105   33884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:12:20.186624   33884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0330 09:12:20.256284   33884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:12:20.325325   33884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:12:20.394452   33884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0330 09:12:20.405900   33884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:12:20.478472   33884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0330 09:12:20.558194   33884 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0330 09:12:20.558305   33884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0330 09:12:20.562931   33884 start.go:549] Will wait 60s for crictl version
	I0330 09:12:20.562993   33884 ssh_runner.go:195] Run: which crictl
	I0330 09:12:20.566852   33884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0330 09:12:20.599382   33884 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0330 09:12:20.599459   33884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:12:20.624566   33884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
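The two probes above read the engine version over SSH; the same checks can be run by hand inside the node, against both the cri-dockerd socket and the Docker engine (socket path and container name from the log):

    #!/usr/bin/env bash
    # Sketch: query runtime versions through CRI and through the engine directly.
    set -euo pipefail
    docker exec skaffold-124000 sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    docker exec skaffold-124000 docker version --format '{{.Server.Version}}'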
	I0330 09:12:20.694493   33884 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 23.0.1 ...
	I0330 09:12:20.694592   33884 cli_runner.go:164] Run: docker exec -t skaffold-124000 dig +short host.docker.internal
	I0330 09:12:20.809200   33884 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:12:20.809333   33884 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:12:20.813713   33884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:12:20.823949   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:20.884588   33884 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 09:12:20.884666   33884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:12:20.906097   33884 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0330 09:12:20.906112   33884 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:12:20.906185   33884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:12:20.925812   33884 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0330 09:12:20.925823   33884 cache_images.go:84] Images are preloaded, skipping loading
	I0330 09:12:20.925899   33884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:12:20.952150   33884 cni.go:84] Creating CNI manager for ""
	I0330 09:12:20.952167   33884 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:12:20.952179   33884 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 09:12:20.952199   33884 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-124000 NodeName:skaffold-124000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:12:20.952330   33884 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "skaffold-124000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0330 09:12:20.952424   33884 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=skaffold-124000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:skaffold-124000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
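	The systemd drop-in above is what actually launches the kubelet with the cri-dockerd socket and node IP shown in the config. A minimal way to inspect it on the node, assuming the same profile name as this run, would be:

	    # Sketch only: show the kubelet unit plus the 10-kubeadm.conf drop-in
	    # that the scp lines below write onto the node.
	    minikube ssh -p skaffold-124000 -- systemctl cat kubelet
	    minikube ssh -p skaffold-124000 -- cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf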
	I0330 09:12:20.952487   33884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0330 09:12:20.960601   33884 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:12:20.960650   33884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:12:20.968133   33884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0330 09:12:20.981067   33884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0330 09:12:20.994311   33884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
	I0330 09:12:21.007329   33884 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:12:21.011174   33884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:12:21.021012   33884 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000 for IP: 192.168.67.2
	I0330 09:12:21.021028   33884 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:21.021216   33884 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:12:21.021323   33884 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:12:21.021362   33884 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/client.key
	I0330 09:12:21.021373   33884 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/client.crt with IP's: []
	I0330 09:12:21.094538   33884 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/client.crt ...
	I0330 09:12:21.094544   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/client.crt: {Name:mka6bcc5e38ce5385863db325cab841156d4ccca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:21.094834   33884 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/client.key ...
	I0330 09:12:21.094838   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/client.key: {Name:mkc342dc9ba1c9241c5e1741d05ec22b468fade8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:21.095044   33884 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.key.c7fa3a9e
	I0330 09:12:21.095054   33884 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0330 09:12:21.269106   33884 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.crt.c7fa3a9e ...
	I0330 09:12:21.269114   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.crt.c7fa3a9e: {Name:mkf8eeb7881176aa06c066f90bb90fadcb218c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:21.269357   33884 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.key.c7fa3a9e ...
	I0330 09:12:21.269361   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.key.c7fa3a9e: {Name:mk326242bc838d7d1e92800b8c72f2095204e61f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:21.269561   33884 certs.go:333] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.crt
	I0330 09:12:21.269732   33884 certs.go:337] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.key
	I0330 09:12:21.269887   33884 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.key
	I0330 09:12:21.269897   33884 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.crt with IP's: []
	I0330 09:12:21.330078   33884 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.crt ...
	I0330 09:12:21.330083   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.crt: {Name:mk2f6bd17868e8e5554dac19b570b9d65b00fd7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:21.330280   33884 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.key ...
	I0330 09:12:21.330284   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.key: {Name:mkcdde4066cb3b2ecf63b4709964542a3a00e37c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:21.330655   33884 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:12:21.330700   33884 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:12:21.330709   33884 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:12:21.330739   33884 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:12:21.330772   33884 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:12:21.330802   33884 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:12:21.330867   33884 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:12:21.331404   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:12:21.350733   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0330 09:12:21.368110   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:12:21.385612   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/skaffold-124000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0330 09:12:21.403205   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:12:21.421025   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:12:21.438453   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:12:21.455799   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:12:21.473384   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:12:21.490780   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:12:21.508848   33884 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:12:21.526398   33884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:12:21.539569   33884 ssh_runner.go:195] Run: openssl version
	I0330 09:12:21.545114   33884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:12:21.553583   33884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:12:21.557585   33884 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:12:21.557622   33884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:12:21.563036   33884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:12:21.571095   33884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:12:21.579223   33884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:12:21.583243   33884 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:12:21.583291   33884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:12:21.588819   33884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
	I0330 09:12:21.597056   33884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:12:21.605367   33884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:12:21.609341   33884 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:12:21.609377   33884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:12:21.614868   33884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
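	The test/ln pairs above follow OpenSSL's hashed-directory convention: each CA certificate installed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash (b5213941 for minikubeCA.pem, 51391683 for 25448.pem, 3ec20f2e for 254482.pem), which is exactly the value printed by the openssl x509 -hash calls. A hand-rolled equivalent for the first certificate, purely as a sketch:

	    # Sketch only: reproduce the hash-named symlink for minikubeCA.pem.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"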
	I0330 09:12:21.622862   33884 kubeadm.go:401] StartCluster: {Name:skaffold-124000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:skaffold-124000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:12:21.622958   33884 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:12:21.642761   33884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:12:21.650611   33884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:12:21.658236   33884 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:12:21.658305   33884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:12:21.665803   33884 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:12:21.665826   33884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:12:21.718281   33884 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
	I0330 09:12:21.718320   33884 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:12:21.826498   33884 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:12:21.826582   33884 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:12:21.826648   33884 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:12:21.958415   33884 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:12:22.001608   33884 out.go:204]   - Generating certificates and keys ...
	I0330 09:12:22.001686   33884 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:12:22.001750   33884 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:12:22.046254   33884 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0330 09:12:22.259937   33884 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0330 09:12:22.351229   33884 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0330 09:12:22.495047   33884 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0330 09:12:22.709385   33884 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0330 09:12:22.709485   33884 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost skaffold-124000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0330 09:12:22.986334   33884 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0330 09:12:22.986455   33884 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost skaffold-124000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0330 09:12:23.125016   33884 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0330 09:12:23.432853   33884 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0330 09:12:23.602678   33884 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0330 09:12:23.602819   33884 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:12:23.705203   33884 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:12:23.896109   33884 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:12:23.994565   33884 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:12:24.281320   33884 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:12:24.291867   33884 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:12:24.292469   33884 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:12:24.292507   33884 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0330 09:12:24.364099   33884 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:12:24.385644   33884 out.go:204]   - Booting up control plane ...
	I0330 09:12:24.385725   33884 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:12:24.385794   33884 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:12:24.385848   33884 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:12:24.385910   33884 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:12:24.386048   33884 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:12:29.872580   33884 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502610 seconds
	I0330 09:12:29.872700   33884 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0330 09:12:29.881024   33884 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0330 09:12:30.398092   33884 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0330 09:12:30.398249   33884 kubeadm.go:322] [mark-control-plane] Marking the node skaffold-124000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0330 09:12:30.906067   33884 kubeadm.go:322] [bootstrap-token] Using token: l37td3.u636jklcga9078yb
	I0330 09:12:30.945187   33884 out.go:204]   - Configuring RBAC rules ...
	I0330 09:12:30.945328   33884 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0330 09:12:30.948555   33884 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0330 09:12:30.988403   33884 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0330 09:12:30.991462   33884 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0330 09:12:30.994194   33884 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0330 09:12:30.996617   33884 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0330 09:12:31.005074   33884 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0330 09:12:31.158737   33884 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0330 09:12:31.352677   33884 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0330 09:12:31.353172   33884 kubeadm.go:322] 
	I0330 09:12:31.353237   33884 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0330 09:12:31.353242   33884 kubeadm.go:322] 
	I0330 09:12:31.353328   33884 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0330 09:12:31.353332   33884 kubeadm.go:322] 
	I0330 09:12:31.353370   33884 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0330 09:12:31.353449   33884 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0330 09:12:31.353516   33884 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0330 09:12:31.353521   33884 kubeadm.go:322] 
	I0330 09:12:31.353590   33884 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0330 09:12:31.353593   33884 kubeadm.go:322] 
	I0330 09:12:31.353635   33884 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0330 09:12:31.353638   33884 kubeadm.go:322] 
	I0330 09:12:31.353684   33884 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0330 09:12:31.353796   33884 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0330 09:12:31.353878   33884 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0330 09:12:31.353882   33884 kubeadm.go:322] 
	I0330 09:12:31.353966   33884 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0330 09:12:31.354027   33884 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0330 09:12:31.354030   33884 kubeadm.go:322] 
	I0330 09:12:31.354093   33884 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l37td3.u636jklcga9078yb \
	I0330 09:12:31.354186   33884 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1e834fc5c7ed9a2912dba9ac16dbe1efd1198393393505f3962bef154a0134ee \
	I0330 09:12:31.354201   33884 kubeadm.go:322] 	--control-plane 
	I0330 09:12:31.354203   33884 kubeadm.go:322] 
	I0330 09:12:31.354272   33884 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0330 09:12:31.354276   33884 kubeadm.go:322] 
	I0330 09:12:31.354338   33884 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l37td3.u636jklcga9078yb \
	I0330 09:12:31.354454   33884 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1e834fc5c7ed9a2912dba9ac16dbe1efd1198393393505f3962bef154a0134ee 
	I0330 09:12:31.358153   33884 kubeadm.go:322] W0330 16:12:21.711071    1316 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0330 09:12:31.358260   33884 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0330 09:12:31.358421   33884 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
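	The bootstrap token embedded in the join commands above is created with a 24h ttl (see the InitConfiguration earlier in this log), so it is only valid for this run. Were a fresh join command needed later, kubeadm can mint one on the control plane; shown only as a sketch, not something the test does:

	    # Sketch only: print a new worker join command with a fresh bootstrap token.
	    kubeadm token create --print-join-command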
	I0330 09:12:31.358450   33884 cni.go:84] Creating CNI manager for ""
	I0330 09:12:31.358459   33884 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:12:31.419298   33884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0330 09:12:31.456288   33884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0330 09:12:31.469840   33884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0330 09:12:31.536696   33884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0330 09:12:31.536783   33884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:12:31.536786   33884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=e1b28cf61afe27b0a5598da1ee43bf06463b8063 minikube.k8s.io/name=skaffold-124000 minikube.k8s.io/updated_at=2023_03_30T09_12_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:12:31.544891   33884 ops.go:34] apiserver oom_adj: -16
	I0330 09:12:31.665522   33884 kubeadm.go:1073] duration metric: took 128.811257ms to wait for elevateKubeSystemPrivileges.
	I0330 09:12:31.665533   33884 kubeadm.go:403] StartCluster complete in 10.04260413s
	I0330 09:12:31.665549   33884 settings.go:142] acquiring lock: {Name:mkee06510b0682aea765fc9cbf62cdda0355bccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:31.665641   33884 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:12:31.666134   33884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:12:31.666376   33884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0330 09:12:31.666402   33884 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0330 09:12:31.666517   33884 addons.go:66] Setting storage-provisioner=true in profile "skaffold-124000"
	I0330 09:12:31.666520   33884 addons.go:66] Setting default-storageclass=true in profile "skaffold-124000"
	I0330 09:12:31.666530   33884 addons.go:228] Setting addon storage-provisioner=true in "skaffold-124000"
	I0330 09:12:31.666532   33884 config.go:182] Loaded profile config "skaffold-124000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:12:31.666533   33884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-124000"
	I0330 09:12:31.666567   33884 host.go:66] Checking if "skaffold-124000" exists ...
	I0330 09:12:31.666836   33884 cli_runner.go:164] Run: docker container inspect skaffold-124000 --format={{.State.Status}}
	I0330 09:12:31.666902   33884 cli_runner.go:164] Run: docker container inspect skaffold-124000 --format={{.State.Status}}
	I0330 09:12:31.746053   33884 addons.go:228] Setting addon default-storageclass=true in "skaffold-124000"
	I0330 09:12:31.764465   33884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0330 09:12:31.764516   33884 host.go:66] Checking if "skaffold-124000" exists ...
	I0330 09:12:31.778160   33884 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
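	The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host address. Reconstructed from the two sed expressions, the Corefile gains a log directive plus a hosts block; a sketch of how to confirm it after the fact:

	    # Sketch only: view the patched Corefile; the pipeline above splices in
	    #        log
	    #        hosts {
	    #           192.168.65.2 host.minikube.internal
	    #           fallthrough
	    #        }
	    kubectl -n kube-system get configmap coredns -o yaml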
	I0330 09:12:31.784416   33884 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:12:31.784422   33884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0330 09:12:31.784485   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:31.786443   33884 cli_runner.go:164] Run: docker container inspect skaffold-124000 --format={{.State.Status}}
	I0330 09:12:31.863102   33884 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0330 09:12:31.863116   33884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0330 09:12:31.863209   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:31.865116   33884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56945 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa Username:docker}
	I0330 09:12:31.931000   33884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56945 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/skaffold-124000/id_rsa Username:docker}
	I0330 09:12:32.032046   33884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:12:32.040841   33884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0330 09:12:32.191167   33884 kapi.go:248] "coredns" deployment in "kube-system" namespace and "skaffold-124000" context rescaled to 1 replicas
	I0330 09:12:32.191186   33884 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:12:32.212803   33884 out.go:177] * Verifying Kubernetes components...
	I0330 09:12:32.254554   33884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:12:32.637719   33884 start.go:917] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0330 09:12:32.679554   33884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-124000
	I0330 09:12:32.703065   33884 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0330 09:12:32.739367   33884 addons.go:499] enable addons completed in 1.072929199s: enabled=[storage-provisioner default-storageclass]
	I0330 09:12:32.765830   33884 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:12:32.765866   33884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:12:32.776046   33884 api_server.go:71] duration metric: took 584.83831ms to wait for apiserver process to appear ...
	I0330 09:12:32.776061   33884 api_server.go:87] waiting for apiserver healthz status ...
	I0330 09:12:32.776076   33884 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56949/healthz ...
	I0330 09:12:32.781547   33884 api_server.go:278] https://127.0.0.1:56949/healthz returned 200:
	ok
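	The healthz probe above goes through the host port that Docker publishes for the apiserver's 8443 (56949 in this run). The same check can be made by hand; a sketch, with the port taken from this log:

	    # Sketch only: query apiserver health directly, or via kubectl once the
	    # kubeconfig written by this run is in place.
	    curl -k https://127.0.0.1:56949/healthz
	    kubectl get --raw /healthz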
	I0330 09:12:32.782926   33884 api_server.go:140] control plane version: v1.26.3
	I0330 09:12:32.782933   33884 api_server.go:130] duration metric: took 6.868529ms to wait for apiserver health ...
	I0330 09:12:32.782939   33884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0330 09:12:32.787165   33884 system_pods.go:59] 5 kube-system pods found
	I0330 09:12:32.787172   33884 system_pods.go:61] "etcd-skaffold-124000" [e1164e0d-958b-4ba2-ad8e-30d93ae3f280] Pending
	I0330 09:12:32.787178   33884 system_pods.go:61] "kube-apiserver-skaffold-124000" [bdbc8da4-1971-4abd-a8f5-7278a70f467d] Pending
	I0330 09:12:32.787180   33884 system_pods.go:61] "kube-controller-manager-skaffold-124000" [2e843d08-488b-4f84-8eef-a088a6a6589e] Pending
	I0330 09:12:32.787182   33884 system_pods.go:61] "kube-scheduler-skaffold-124000" [ddf737a3-53d8-40b4-9711-1584df693269] Pending
	I0330 09:12:32.787187   33884 system_pods.go:61] "storage-provisioner" [6be139f3-c768-4551-9261-ce0eb9d4ec31] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0330 09:12:32.787191   33884 system_pods.go:74] duration metric: took 4.24831ms to wait for pod list to return data ...
	I0330 09:12:32.787195   33884 kubeadm.go:578] duration metric: took 595.99012ms to wait for : map[apiserver:true system_pods:true] ...
	I0330 09:12:32.787202   33884 node_conditions.go:102] verifying NodePressure condition ...
	I0330 09:12:32.789626   33884 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0330 09:12:32.789634   33884 node_conditions.go:123] node cpu capacity is 6
	I0330 09:12:32.789643   33884 node_conditions.go:105] duration metric: took 2.439981ms to run NodePressure ...
	I0330 09:12:32.789649   33884 start.go:228] waiting for startup goroutines ...
	I0330 09:12:32.789653   33884 start.go:233] waiting for cluster config update ...
	I0330 09:12:32.789661   33884 start.go:242] writing updated cluster config ...
	I0330 09:12:32.789985   33884 ssh_runner.go:195] Run: rm -f paused
	I0330 09:12:32.828992   33884 start.go:557] kubectl: 1.25.4, cluster: 1.26.3 (minor skew: 1)
	I0330 09:12:32.866730   33884 out.go:177] * Done! kubectl is now configured to use "skaffold-124000" cluster and "default" namespace by default
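	With the start complete, the host kubeconfig now points at the new cluster; a quick confirmation, assuming the default kubeconfig this run updated:

	    # Sketch only: verify the context and the single control-plane node.
	    kubectl config current-context   # expected: skaffold-124000
	    kubectl get nodes -o wide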
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-03-30 16:12:16 UTC, end at Thu 2023-03-30 16:12:39 UTC. --
	Mar 30 16:12:20 skaffold-124000 cri-dockerd[1048]: time="2023-03-30T16:12:20Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Mar 30 16:12:20 skaffold-124000 cri-dockerd[1048]: time="2023-03-30T16:12:20Z" level=info msg="Start cri-dockerd grpc backend"
	Mar 30 16:12:20 skaffold-124000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Mar 30 16:12:25 skaffold-124000 cri-dockerd[1048]: time="2023-03-30T16:12:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e0387e4e092cd47c0f0a5e20bd892f58f35f102215d3d9ede6b01f9a5cbe2824/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:12:25 skaffold-124000 cri-dockerd[1048]: time="2023-03-30T16:12:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6889f2fef5d520ec06ac28b9e89e8b065afbc3e25232e9b7173a059785f758e/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:12:25 skaffold-124000 cri-dockerd[1048]: time="2023-03-30T16:12:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa234f3cd8c88406887100f2f159890727509a595dd5ff92fd96c61f6e8ad997/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:12:25 skaffold-124000 cri-dockerd[1048]: time="2023-03-30T16:12:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3da5ac4c34a0ebe7f833bcf617eee4864cf9dbb29b9e6844e8c05e4d07692485/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128208655Z" level=info msg="[core] [Channel #8] Channel created" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128340323Z" level=info msg="[core] [Channel #8] original dial target is: \"localhost\"" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128359175Z" level=info msg="[core] [Channel #8] parsed dial target is: {Scheme: Authority: Endpoint:localhost URL:{Scheme: Opaque: User: Host: Path:localhost RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128367163Z" level=info msg="[core] [Channel #8] fallback to scheme \"passthrough\"" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128376812Z" level=info msg="[core] [Channel #8] parsed dial target is: {Scheme:passthrough Authority: Endpoint:localhost URL:{Scheme:passthrough Opaque: User: Host: Path:/localhost RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128384704Z" level=info msg="[core] [Channel #8] Channel authority set to \"localhost\"" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128432811Z" level=info msg="[core] [Channel #8] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"localhost\",\n      \"ServerName\": \"\",\n      \"Attributes\": null,\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128454932Z" level=info msg="[core] [Channel #8] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128476542Z" level=info msg="[core] [Channel #8 SubChannel #9] Subchannel created" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128638021Z" level=info msg="[core] [Channel #8 SubChannel #9] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128661848Z" level=info msg="[core] [Channel #8 SubChannel #9] Subchannel picks a new address \"localhost\" to connect" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.128734966Z" level=info msg="[core] [Channel #8] Channel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.129084941Z" level=info msg="[core] [Channel #8 SubChannel #9] Subchannel Connectivity change to READY" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.129101184Z" level=info msg="[core] [Channel #8] Channel Connectivity change to READY" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.532814705Z" level=info msg="trying next host" error="error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `keychain cannot be accessed because the current session does not allow user interaction. The keychain may be locked; unlock it by running \"security -v unlock-keychain ~/Library/Keychains/login.keychain-db\" and try again``" host=registry-1.docker.io
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.532976188Z" level=info msg="trying next host" error="Canceled: context canceled" host=registry-1.docker.io
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.534408785Z" level=info msg="[core] [Channel #8 SubChannel #9] Subchannel Connectivity change to IDLE" module=grpc
	Mar 30 16:12:36 skaffold-124000 dockerd[833]: time="2023-03-30T16:12:36.534595957Z" level=info msg="[core] [Channel #8] Channel Connectivity change to IDLE" module=grpc
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1d5c54acf0fce       fce326961ae2d       15 seconds ago      Running             etcd                      0                   a6889f2fef5d5
	121fcc638fe59       1d9b3cbae03ce       15 seconds ago      Running             kube-apiserver            0                   fa234f3cd8c88
	c892692cd3b6a       5a79047369329       15 seconds ago      Running             kube-scheduler            0                   3da5ac4c34a0e
	6c33f50080d45       ce8c2293ef09c       15 seconds ago      Running             kube-controller-manager   0                   e0387e4e092cd
	
	* 
	* ==> describe nodes <==
	* Name:               skaffold-124000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-124000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e1b28cf61afe27b0a5598da1ee43bf06463b8063
	                    minikube.k8s.io/name=skaffold-124000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_30T09_12_31_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 30 Mar 2023 16:12:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-124000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 30 Mar 2023 16:12:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 30 Mar 2023 16:12:31 +0000   Thu, 30 Mar 2023 16:12:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 30 Mar 2023 16:12:31 +0000   Thu, 30 Mar 2023 16:12:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 30 Mar 2023 16:12:31 +0000   Thu, 30 Mar 2023 16:12:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 30 Mar 2023 16:12:31 +0000   Thu, 30 Mar 2023 16:12:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    skaffold-124000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                b249c14bbd9147e887f6315aff00ef06
	  Boot ID:                    b745a502-078f-4e66-a21d-1fdb66506a40
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.3
	  Kube-Proxy Version:         v1.26.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-skaffold-124000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kube-apiserver-skaffold-124000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-skaffold-124000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-skaffold-124000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 16s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  16s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15s (x5 over 16s)  kubelet  Node skaffold-124000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15s (x4 over 16s)  kubelet  Node skaffold-124000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15s (x4 over 16s)  kubelet  Node skaffold-124000 status is now: NodeHasSufficientPID
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9s                 kubelet  Node skaffold-124000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s                 kubelet  Node skaffold-124000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s                 kubelet  Node skaffold-124000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.000069] FS-Cache: O-key=[8] '92ec050500000000'
	[  +0.000058] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000063] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=00000000b5c08eef
	[  +0.000075] FS-Cache: N-key=[8] '92ec050500000000'
	[  +0.002671] FS-Cache: Duplicate cookie detected
	[  +0.000030] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000085] FS-Cache: O-cookie d=00000000841d7711{9p.inode} n=00000000195e1b81
	[  +0.000062] FS-Cache: O-key=[8] '92ec050500000000'
	[  +0.000063] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000054] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=00000000658f0f61
	[  +0.000049] FS-Cache: N-key=[8] '92ec050500000000'
	[  +3.552031] FS-Cache: Duplicate cookie detected
	[  +0.000056] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000038] FS-Cache: O-cookie d=00000000841d7711{9p.inode} n=0000000041fed827
	[  +0.000076] FS-Cache: O-key=[8] '91ec050500000000'
	[  +0.000046] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000054] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=00000000c6ec446f
	[  +0.000072] FS-Cache: N-key=[8] '91ec050500000000'
	[  +0.507564] FS-Cache: Duplicate cookie detected
	[  +0.000076] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000040] FS-Cache: O-cookie d=00000000841d7711{9p.inode} n=00000000af66c14c
	[  +0.000069] FS-Cache: O-key=[8] '98ec050500000000'
	[  +0.000040] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000056] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=000000009516b7ed
	[  +0.000073] FS-Cache: N-key=[8] '98ec050500000000'
	
	* 
	* ==> etcd [1d5c54acf0fc] <==
	* {"level":"info","ts":"2023-03-30T16:12:26.057Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-30T16:12:26.057Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-30T16:12:26.057Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-30T16:12:26.057Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-30T16:12:26.057Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-30T16:12:26.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-03-30T16:12:26.058Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-03-30T16:12:26.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2023-03-30T16:12:26.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-03-30T16:12:26.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-03-30T16:12:26.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-03-30T16:12:26.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-03-30T16:12:26.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-03-30T16:12:26.239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-03-30T16:12:26.241Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:skaffold-124000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-30T16:12:26.241Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-30T16:12:26.241Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-30T16:12:26.241Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-30T16:12:26.241Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-30T16:12:26.241Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-30T16:12:26.241Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-30T16:12:26.243Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-30T16:12:26.246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-30T16:12:26.246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-30T16:12:26.246Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  16:12:40 up  2:11,  0 users,  load average: 0.36, 0.96, 1.03
	Linux skaffold-124000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [121fcc638fe5] <==
	* I0330 16:12:27.937690       1 controller.go:615] quota admission added evaluator for: namespaces
	I0330 16:12:27.942453       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0330 16:12:27.986376       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0330 16:12:27.986482       1 cache.go:39] Caches are synced for autoregister controller
	I0330 16:12:27.986635       1 shared_informer.go:280] Caches are synced for configmaps
	I0330 16:12:27.986668       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0330 16:12:27.986689       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0330 16:12:27.986818       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0330 16:12:27.986844       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0330 16:12:27.986865       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0330 16:12:28.002405       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0330 16:12:28.709453       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0330 16:12:28.891417       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0330 16:12:28.893884       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0330 16:12:28.893919       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0330 16:12:29.374524       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0330 16:12:29.440912       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0330 16:12:29.512440       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0330 16:12:29.529679       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0330 16:12:29.530771       1 controller.go:615] quota admission added evaluator for: endpoints
	I0330 16:12:29.534495       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0330 16:12:29.949826       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0330 16:12:31.149129       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0330 16:12:31.157177       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0330 16:12:31.163637       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [6c33f50080d4] <==
	* I0330 16:12:30.750808       1 controllermanager.go:622] Started "csrsigning"
	I0330 16:12:30.750869       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0330 16:12:30.750799       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0330 16:12:30.750859       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0330 16:12:30.751011       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0330 16:12:30.900832       1 controllermanager.go:622] Started "persistentvolume-binder"
	W0330 16:12:30.900870       1 core.go:221] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.
	W0330 16:12:30.900877       1 controllermanager.go:600] Skipping "route"
	I0330 16:12:30.900922       1 pv_controller_base.go:318] Starting persistent volume controller
	I0330 16:12:30.900931       1 shared_informer.go:273] Waiting for caches to sync for persistent volume
	I0330 16:12:31.050025       1 controllermanager.go:622] Started "ephemeral-volume"
	I0330 16:12:31.050089       1 controller.go:169] Starting ephemeral volume controller
	I0330 16:12:31.050095       1 shared_informer.go:273] Waiting for caches to sync for ephemeral
	I0330 16:12:31.231833       1 controllermanager.go:622] Started "job"
	I0330 16:12:31.231920       1 job_controller.go:191] Starting job controller
	I0330 16:12:31.231926       1 shared_informer.go:273] Waiting for caches to sync for job
	I0330 16:12:31.531191       1 controllermanager.go:622] Started "horizontalpodautoscaling"
	I0330 16:12:31.531272       1 horizontal.go:181] Starting HPA controller
	I0330 16:12:31.531280       1 shared_informer.go:273] Waiting for caches to sync for HPA
	I0330 16:12:31.548412       1 controllermanager.go:622] Started "csrapproving"
	I0330 16:12:31.548485       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0330 16:12:31.548492       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving
	I0330 16:12:31.700050       1 controllermanager.go:622] Started "ttl"
	I0330 16:12:31.700123       1 ttl_controller.go:120] Starting TTL controller
	I0330 16:12:31.700130       1 shared_informer.go:273] Waiting for caches to sync for TTL
	
	* 
	* ==> kube-scheduler [c892692cd3b6] <==
	* E0330 16:12:27.948298       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0330 16:12:27.948308       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0330 16:12:27.948420       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0330 16:12:27.948484       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0330 16:12:28.791177       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0330 16:12:28.791238       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0330 16:12:28.847269       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0330 16:12:28.847337       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0330 16:12:28.866704       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0330 16:12:28.866748       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0330 16:12:28.942685       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0330 16:12:28.942733       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0330 16:12:28.982665       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0330 16:12:28.982712       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0330 16:12:29.060817       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0330 16:12:29.060866       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0330 16:12:29.088342       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0330 16:12:29.088394       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0330 16:12:29.100863       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0330 16:12:29.100932       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0330 16:12:29.170267       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0330 16:12:29.170312       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0330 16:12:29.178463       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0330 16:12:29.178514       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0330 16:12:31.046776       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-03-30 16:12:16 UTC, end at Thu 2023-03-30 16:12:40 UTC. --
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529152    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b25de4fc9fddd4d8fc32fbf1f4332b36-kubeconfig\") pod \"kube-scheduler-skaffold-124000\" (UID: \"b25de4fc9fddd4d8fc32fbf1f4332b36\") " pod="kube-system/kube-scheduler-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529183    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/247495118e3aa0e015ff01c6b4eb9589-usr-local-share-ca-certificates\") pod \"kube-apiserver-skaffold-124000\" (UID: \"247495118e3aa0e015ff01c6b4eb9589\") " pod="kube-system/kube-apiserver-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529227    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e83c4d7f1e15d8cfbce08853c9954ede-ca-certs\") pod \"kube-controller-manager-skaffold-124000\" (UID: \"e83c4d7f1e15d8cfbce08853c9954ede\") " pod="kube-system/kube-controller-manager-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529258    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e83c4d7f1e15d8cfbce08853c9954ede-usr-share-ca-certificates\") pod \"kube-controller-manager-skaffold-124000\" (UID: \"e83c4d7f1e15d8cfbce08853c9954ede\") " pod="kube-system/kube-controller-manager-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529392    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/247495118e3aa0e015ff01c6b4eb9589-ca-certs\") pod \"kube-apiserver-skaffold-124000\" (UID: \"247495118e3aa0e015ff01c6b4eb9589\") " pod="kube-system/kube-apiserver-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529439    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/247495118e3aa0e015ff01c6b4eb9589-k8s-certs\") pod \"kube-apiserver-skaffold-124000\" (UID: \"247495118e3aa0e015ff01c6b4eb9589\") " pod="kube-system/kube-apiserver-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529490    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/247495118e3aa0e015ff01c6b4eb9589-usr-share-ca-certificates\") pod \"kube-apiserver-skaffold-124000\" (UID: \"247495118e3aa0e015ff01c6b4eb9589\") " pod="kube-system/kube-apiserver-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529522    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e83c4d7f1e15d8cfbce08853c9954ede-etc-ca-certificates\") pod \"kube-controller-manager-skaffold-124000\" (UID: \"e83c4d7f1e15d8cfbce08853c9954ede\") " pod="kube-system/kube-controller-manager-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529551    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e83c4d7f1e15d8cfbce08853c9954ede-kubeconfig\") pod \"kube-controller-manager-skaffold-124000\" (UID: \"e83c4d7f1e15d8cfbce08853c9954ede\") " pod="kube-system/kube-controller-manager-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529605    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e83c4d7f1e15d8cfbce08853c9954ede-usr-local-share-ca-certificates\") pod \"kube-controller-manager-skaffold-124000\" (UID: \"e83c4d7f1e15d8cfbce08853c9954ede\") " pod="kube-system/kube-controller-manager-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529649    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/eaa641b8453179b7fbb1499d58d65788-etcd-certs\") pod \"etcd-skaffold-124000\" (UID: \"eaa641b8453179b7fbb1499d58d65788\") " pod="kube-system/etcd-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529694    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/247495118e3aa0e015ff01c6b4eb9589-etc-ca-certificates\") pod \"kube-apiserver-skaffold-124000\" (UID: \"247495118e3aa0e015ff01c6b4eb9589\") " pod="kube-system/kube-apiserver-skaffold-124000"
	Mar 30 16:12:31 skaffold-124000 kubelet[2320]: I0330 16:12:31.529717    2320 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e83c4d7f1e15d8cfbce08853c9954ede-flexvolume-dir\") pod \"kube-controller-manager-skaffold-124000\" (UID: \"e83c4d7f1e15d8cfbce08853c9954ede\") " pod="kube-system/kube-controller-manager-skaffold-124000"
	Mar 30 16:12:32 skaffold-124000 kubelet[2320]: I0330 16:12:32.263930    2320 apiserver.go:52] "Watching apiserver"
	Mar 30 16:12:32 skaffold-124000 kubelet[2320]: I0330 16:12:32.628354    2320 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 30 16:12:32 skaffold-124000 kubelet[2320]: I0330 16:12:32.640352    2320 reconciler.go:41] "Reconciler: start to sync state"
	Mar 30 16:12:32 skaffold-124000 kubelet[2320]: E0330 16:12:32.909052    2320 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"etcd-skaffold-124000\" already exists" pod="kube-system/etcd-skaffold-124000"
	Mar 30 16:12:33 skaffold-124000 kubelet[2320]: E0330 16:12:33.070316    2320 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-skaffold-124000\" already exists" pod="kube-system/kube-controller-manager-skaffold-124000"
	Mar 30 16:12:33 skaffold-124000 kubelet[2320]: E0330 16:12:33.268960    2320 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-skaffold-124000\" already exists" pod="kube-system/kube-apiserver-skaffold-124000"
	Mar 30 16:12:33 skaffold-124000 kubelet[2320]: I0330 16:12:33.464208    2320 request.go:690] Waited for 1.016006198s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 30 16:12:33 skaffold-124000 kubelet[2320]: E0330 16:12:33.469619    2320 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-skaffold-124000\" already exists" pod="kube-system/kube-scheduler-skaffold-124000"
	Mar 30 16:12:33 skaffold-124000 kubelet[2320]: I0330 16:12:33.670332    2320 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-skaffold-124000" podStartSLOduration=2.670270689 pod.CreationTimestamp="2023-03-30 16:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-30 16:12:33.670155605 +0000 UTC m=+2.538983557" watchObservedRunningTime="2023-03-30 16:12:33.670270689 +0000 UTC m=+2.539098646"
	Mar 30 16:12:34 skaffold-124000 kubelet[2320]: I0330 16:12:34.870557    2320 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-skaffold-124000" podStartSLOduration=3.870530658 pod.CreationTimestamp="2023-03-30 16:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-30 16:12:34.870511607 +0000 UTC m=+3.739339564" watchObservedRunningTime="2023-03-30 16:12:34.870530658 +0000 UTC m=+3.739358609"
	Mar 30 16:12:34 skaffold-124000 kubelet[2320]: I0330 16:12:34.870666    2320 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-skaffold-124000" podStartSLOduration=5.870652489 pod.CreationTimestamp="2023-03-30 16:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-30 16:12:34.468916073 +0000 UTC m=+3.337744024" watchObservedRunningTime="2023-03-30 16:12:34.870652489 +0000 UTC m=+3.739480440"
	Mar 30 16:12:35 skaffold-124000 kubelet[2320]: I0330 16:12:35.269642    2320 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-skaffold-124000" podStartSLOduration=4.269612511 pod.CreationTimestamp="2023-03-30 16:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-03-30 16:12:35.269474982 +0000 UTC m=+4.138302932" watchObservedRunningTime="2023-03-30 16:12:35.269612511 +0000 UTC m=+4.138440459"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p skaffold-124000 -n skaffold-124000
helpers_test.go:261: (dbg) Run:  kubectl --context skaffold-124000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestSkaffold]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context skaffold-124000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context skaffold-124000 describe pod storage-provisioner: exit status 1 (51.433746ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context skaffold-124000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "skaffold-124000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-124000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-124000: (2.639689437s)
--- FAIL: TestSkaffold (43.49s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (85.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2485683219.exe start -p running-upgrade-503000 --memory=2200 --vm-driver=docker 
E0330 09:16:12.280680   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:16:17.963600   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2485683219.exe start -p running-upgrade-503000 --memory=2200 --vm-driver=docker : exit status 70 (1m8.263478044s)

                                                
                                                
-- stdout --
	! [running-upgrade-503000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2523593296
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:16:06.387194940 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-503000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:16:25.967194753 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-503000", then "minikube start -p running-upgrade-503000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 189.59 KiB ... 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:16:25.967194753 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2485683219.exe start -p running-upgrade-503000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2485683219.exe start -p running-upgrade-503000 --memory=2200 --vm-driver=docker : exit status 70 (4.466742751s)

                                                
                                                
-- stdout --
	* [running-upgrade-503000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig4255706939
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-503000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2485683219.exe start -p running-upgrade-503000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.2485683219.exe start -p running-upgrade-503000 --memory=2200 --vm-driver=docker : exit status 70 (4.98706514s)

                                                
                                                
-- stdout --
	* [running-upgrade-503000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3084478580
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-503000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
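The unit diff quoted repeatedly in the failures above carries its own explanation of the pattern the provisioner relies on: systemd only allows multiple ExecStart= lines for Type=oneshot services, so a unit or drop-in that inherits ExecStart from a base docker.service has to reset it with an empty assignment before defining a new command, otherwise systemd refuses to start the service with "Service has more than one ExecStart= setting". As a minimal, hypothetical illustration of that pattern (the drop-in path and command below are examples, not part of this test output):

	# /etc/systemd/system/docker.service.d/10-override.conf   (hypothetical example)
	[Service]
	# Clear the ExecStart inherited from the base docker.service first;
	# without this empty assignment systemd rejects the unit for non-oneshot services.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	
	# Apply the override with:
	#   sudo systemctl daemon-reload && sudo systemctl restart docker

The report itself only records that docker.service failed to restart after the generated unit was written; as the error suggests, "systemctl status docker.service" and "journalctl -xe" on the node would be needed to see why dockerd rejected the new command line.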
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-03-30 09:16:39.594944 -0700 PDT m=+2356.859104770
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-503000
helpers_test.go:235: (dbg) docker inspect running-upgrade-503000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2f00be36112ad08a041af39618da0704bcea8ee818d6da256c881a60b4fbd28",
	        "Created": "2023-03-30T16:16:14.696985865Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 558691,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:16:14.920044597Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/c2f00be36112ad08a041af39618da0704bcea8ee818d6da256c881a60b4fbd28/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2f00be36112ad08a041af39618da0704bcea8ee818d6da256c881a60b4fbd28/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2f00be36112ad08a041af39618da0704bcea8ee818d6da256c881a60b4fbd28/hosts",
	        "LogPath": "/var/lib/docker/containers/c2f00be36112ad08a041af39618da0704bcea8ee818d6da256c881a60b4fbd28/c2f00be36112ad08a041af39618da0704bcea8ee818d6da256c881a60b4fbd28-json.log",
	        "Name": "/running-upgrade-503000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-503000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6ab8c56b56372770fcb8e6f9a49c0b6d155c2486bb9304c74416d24d3a900af0-init/diff:/var/lib/docker/overlay2/cf01fb17109cfe890b79100452271544674c79de1e99e7cb554dd9846dd2dc20/diff:/var/lib/docker/overlay2/2ee96a5e3cd957476bbbb0dedec995768fbfa53b883e890355fe05c4edf51dec/diff:/var/lib/docker/overlay2/5a883bc8f2bfc1bd8a0d79cdf0c589f76f46b3712d9ebadde53b16c358448176/diff:/var/lib/docker/overlay2/b66f255016d8fd6edade780389c232a6b53e24204ade62186925069b2ad55ac0/diff:/var/lib/docker/overlay2/20cec7edb46d540d3c7a50816cd660a7f5b68a539c97bc2f4c5de5d958a7052b/diff:/var/lib/docker/overlay2/eb605471f3c21ba6238e73b8020447e2ecb4554c808c3ba8e9b0e2d4387cb15e/diff:/var/lib/docker/overlay2/01b084f0312a32d2f204e50a20c943943e4df09ae1cf39e2ef13117e221bb8a9/diff:/var/lib/docker/overlay2/021330f16a7ab5a5c536939c8a71616c5da3103a1603c93db60b99224076ab60/diff:/var/lib/docker/overlay2/3f7e0648776bc5e47c8a5d6a5c3e88b721c09be9811331528a8fb97aa9fa51ae/diff:/var/lib/docker/overlay2/e96ef2
541033bc1a9853ec6a5b4b1a4a8f35419ec21c7afdcd994b2a3dd7180a/diff:/var/lib/docker/overlay2/24b61f762b2638958ff42473d8cad19edf2953806250fe230588819922ab61a2/diff:/var/lib/docker/overlay2/7e4b405a358035781bd33e603483d85a8f2be6037719b265ae858066a3e744b3/diff:/var/lib/docker/overlay2/b6d2880761fd066f62c11e70c25b464e0a080787454d2c1a974bba59f76c6bc3/diff:/var/lib/docker/overlay2/68368955348bb279c112e5671f1643ed1cd02b5533983ea062ec2f14deb0e6b4/diff:/var/lib/docker/overlay2/5f3cc28fef90b9acd47130a59b90339683764b990d4d5687433f80548ea47109/diff:/var/lib/docker/overlay2/47ed3869d356b737fc0bf6fb764c60ed1c1677a4dc7bf8c1a8b4d170cb46eb07/diff:/var/lib/docker/overlay2/d7c31f5bb479e33a2ea9ddce92847a1e0b63dfc625149d726be3ac5619355542/diff:/var/lib/docker/overlay2/72696531166a9a0894848960ac635a91e06f1ef9f8132eaf83a048525b06c980/diff:/var/lib/docker/overlay2/4cdc8deeb54bb5c440e9bb3f0914cc3e8ae9dfc04c1226f9d0363962c496919c/diff:/var/lib/docker/overlay2/13247dda6599146b42df1376d0c47d3094006c754e397639d54f68f4bec71990/diff:/var/lib/d
ocker/overlay2/aa5e2788242d209fb8846b24bd1585829693186a5a036f0d36de2b9cc10dcc04/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ab8c56b56372770fcb8e6f9a49c0b6d155c2486bb9304c74416d24d3a900af0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ab8c56b56372770fcb8e6f9a49c0b6d155c2486bb9304c74416d24d3a900af0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ab8c56b56372770fcb8e6f9a49c0b6d155c2486bb9304c74416d24d3a900af0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-503000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-503000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-503000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-503000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-503000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8503d4de8359dcb204695517a98a92d0932ecf81c607f72454fc1ed57d8cd4af",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57248"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57249"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57250"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8503d4de8359",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "113bf008627895ae7beb9f66568fb9d8361eb6bc47da935ec0f463ab1c6d4c12",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "fb5d7a6f98b327fc28f84e153589aa4b81d3f891558b99e2f18277157f8722f9",
	                    "EndpointID": "113bf008627895ae7beb9f66568fb9d8361eb6bc47da935ec0f463ab1c6d4c12",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-503000 -n running-upgrade-503000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-503000 -n running-upgrade-503000: exit status 6 (390.913502ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0330 09:16:40.045259   35862 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-503000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-503000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-503000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-503000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-503000: (2.380725942s)
--- FAIL: TestRunningBinaryUpgrade (85.88s)
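
The status failure above reduces to the profile having no entry in the kubeconfig the harness points at, which is what the status.go error reports. A minimal sketch of that lookup using client-go's kubeconfig loader (an assumed dependency; the path and profile name are the ones from this run):

// kubeconfig_check.go: report whether a minikube profile has a cluster entry in a kubeconfig.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/Users/jenkins/minikube-integration/16199-24978/kubeconfig" // path from the error above
	profile := "running-upgrade-503000"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	if c, ok := cfg.Clusters[profile]; ok {
		fmt.Printf("cluster %q -> server %s\n", profile, c.Server)
	} else {
		// The missing-entry case; the log's own suggestion is `minikube update-context`.
		fmt.Printf("cluster %q not found in %s\n", profile, path)
	}
}
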

TestKubernetesUpgrade (379.57s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.98958117s)

-- stdout --
	* [kubernetes-upgrade-185000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-185000 in cluster kubernetes-upgrade-185000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0330 09:17:50.212550   36325 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:17:50.212735   36325 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:17:50.212743   36325 out.go:309] Setting ErrFile to fd 2...
	I0330 09:17:50.212747   36325 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:17:50.212875   36325 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:17:50.214457   36325 out.go:303] Setting JSON to false
	I0330 09:17:50.236372   36325 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8238,"bootTime":1680184832,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:17:50.236466   36325 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:17:50.258277   36325 out.go:177] * [kubernetes-upgrade-185000] minikube v1.29.0 on Darwin 13.3
	I0330 09:17:50.300331   36325 notify.go:220] Checking for updates...
	I0330 09:17:50.321274   36325 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:17:50.379191   36325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:17:50.426217   36325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:17:50.502380   36325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:17:50.561406   36325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:17:50.591438   36325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:17:50.630462   36325 config.go:182] Loaded profile config "cert-expiration-220000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:17:50.630567   36325 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:17:50.701602   36325 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:17:50.701737   36325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:17:50.899046   36325 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-30 16:17:50.756164121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:17:50.923718   36325 out.go:177] * Using the docker driver based on user configuration
	I0330 09:17:50.943437   36325 start.go:295] selected driver: docker
	I0330 09:17:50.943447   36325 start.go:859] validating driver "docker" against <nil>
	I0330 09:17:50.943454   36325 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:17:50.946407   36325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:17:51.139550   36325 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-30 16:17:51.001146069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:17:51.139663   36325 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0330 09:17:51.139858   36325 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0330 09:17:51.161499   36325 out.go:177] * Using Docker Desktop driver with root privileges
	I0330 09:17:51.182242   36325 cni.go:84] Creating CNI manager for ""
	I0330 09:17:51.182264   36325 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 09:17:51.182277   36325 start_flags.go:319] config:
	{Name:kubernetes-upgrade-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-185000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:17:51.224260   36325 out.go:177] * Starting control plane node kubernetes-upgrade-185000 in cluster kubernetes-upgrade-185000
	I0330 09:17:51.245201   36325 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:17:51.266020   36325 out.go:177] * Pulling base image ...
	I0330 09:17:51.287245   36325 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:17:51.287248   36325 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:17:51.287306   36325 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0330 09:17:51.287316   36325 cache.go:57] Caching tarball of preloaded images
	I0330 09:17:51.287426   36325 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:17:51.287438   36325 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0330 09:17:51.288008   36325 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/config.json ...
	I0330 09:17:51.288093   36325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/config.json: {Name:mk78eee10051368d78e89a627b954bd4b8161a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:17:51.348463   36325 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:17:51.348482   36325 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:17:51.348502   36325 cache.go:193] Successfully downloaded all kic artifacts
	I0330 09:17:51.348540   36325 start.go:364] acquiring machines lock for kubernetes-upgrade-185000: {Name:mk0fd81fda3374b37e2a514dc21dc34a8c66fd00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:17:51.348700   36325 start.go:368] acquired machines lock for "kubernetes-upgrade-185000" in 148.824µs
	I0330 09:17:51.348730   36325 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-185000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:17:51.348803   36325 start.go:125] createHost starting for "" (driver="docker")
	I0330 09:17:51.370414   36325 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0330 09:17:51.370630   36325 start.go:159] libmachine.API.Create for "kubernetes-upgrade-185000" (driver="docker")
	I0330 09:17:51.370721   36325 client.go:168] LocalClient.Create starting
	I0330 09:17:51.370908   36325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem
	I0330 09:17:51.371053   36325 main.go:141] libmachine: Decoding PEM data...
	I0330 09:17:51.371092   36325 main.go:141] libmachine: Parsing certificate...
	I0330 09:17:51.371170   36325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem
	I0330 09:17:51.371198   36325 main.go:141] libmachine: Decoding PEM data...
	I0330 09:17:51.371208   36325 main.go:141] libmachine: Parsing certificate...
	I0330 09:17:51.407410   36325 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-185000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0330 09:17:51.471801   36325 cli_runner.go:211] docker network inspect kubernetes-upgrade-185000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0330 09:17:51.471906   36325 network_create.go:281] running [docker network inspect kubernetes-upgrade-185000] to gather additional debugging logs...
	I0330 09:17:51.471927   36325 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-185000
	W0330 09:17:51.533220   36325 cli_runner.go:211] docker network inspect kubernetes-upgrade-185000 returned with exit code 1
	I0330 09:17:51.533248   36325 network_create.go:284] error running [docker network inspect kubernetes-upgrade-185000]: docker network inspect kubernetes-upgrade-185000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-185000
	I0330 09:17:51.533259   36325 network_create.go:286] output of [docker network inspect kubernetes-upgrade-185000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-185000
	
	** /stderr **
	I0330 09:17:51.533343   36325 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0330 09:17:51.602726   36325 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0330 09:17:51.603059   36325 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010030f0}
	I0330 09:17:51.603074   36325 network_create.go:123] attempt to create docker network kubernetes-upgrade-185000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0330 09:17:51.603150   36325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-185000 kubernetes-upgrade-185000
	W0330 09:17:51.664683   36325 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-185000 kubernetes-upgrade-185000 returned with exit code 1
	W0330 09:17:51.664744   36325 network_create.go:148] failed to create docker network kubernetes-upgrade-185000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-185000 kubernetes-upgrade-185000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0330 09:17:51.664797   36325 network_create.go:115] failed to create docker network kubernetes-upgrade-185000 192.168.58.0/24, will retry: subnet is taken
	I0330 09:17:51.666125   36325 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0330 09:17:51.666525   36325 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001003f30}
	I0330 09:17:51.666538   36325 network_create.go:123] attempt to create docker network kubernetes-upgrade-185000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0330 09:17:51.666605   36325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-185000 kubernetes-upgrade-185000
	I0330 09:17:51.765171   36325 network_create.go:107] docker network kubernetes-upgrade-185000 192.168.67.0/24 created
	I0330 09:17:51.765221   36325 kic.go:117] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-185000" container
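
The two attempts above are a subnet-overlap retry: 192.168.58.0/24 collided with an existing pool, so it was marked reserved and 192.168.67.0/24 was used instead. A minimal standard-library sketch of that overlap condition (CIDRs taken from this log; an illustration, not minikube's actual network.go code):

// overlap.go: show the subnet-overlap condition behind the retry above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// 192.168.58.0/24 is treated as already reserved, as in the second pass above.
	taken := netip.MustParsePrefix("192.168.58.0/24")
	candidates := []string{"192.168.58.0/24", "192.168.67.0/24"}

	for _, c := range candidates {
		p := netip.MustParsePrefix(c)
		if p.Overlaps(taken) {
			fmt.Printf("skipping subnet %s that is reserved\n", c)
			continue
		}
		fmt.Printf("using free private subnet %s\n", c)
		break
	}
}
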
	I0330 09:17:51.765365   36325 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0330 09:17:51.826431   36325 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-185000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-185000 --label created_by.minikube.sigs.k8s.io=true
	I0330 09:17:51.885054   36325 oci.go:103] Successfully created a docker volume kubernetes-upgrade-185000
	I0330 09:17:51.885192   36325 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-185000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-185000 --entrypoint /usr/bin/test -v kubernetes-upgrade-185000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0330 09:17:52.361614   36325 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-185000
	I0330 09:17:52.361648   36325 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:17:52.361663   36325 kic.go:190] Starting extracting preloaded images to volume ...
	I0330 09:17:52.361780   36325 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-185000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0330 09:17:58.218350   36325 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-185000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (5.856175065s)
	I0330 09:17:58.218376   36325 kic.go:199] duration metric: took 5.856384 seconds to extract preloaded images to volume
	I0330 09:17:58.218487   36325 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0330 09:17:58.406650   36325 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-185000 --name kubernetes-upgrade-185000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-185000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-185000 --network kubernetes-upgrade-185000 --ip 192.168.67.2 --volume kubernetes-upgrade-185000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0330 09:17:58.789379   36325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Running}}
	I0330 09:17:58.853027   36325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Status}}
	I0330 09:17:58.921677   36325 cli_runner.go:164] Run: docker exec kubernetes-upgrade-185000 stat /var/lib/dpkg/alternatives/iptables
	I0330 09:17:59.037603   36325 oci.go:144] the created container "kubernetes-upgrade-185000" has a running status.
	I0330 09:17:59.037637   36325 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa...
	I0330 09:17:59.492838   36325 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0330 09:17:59.604415   36325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Status}}
	I0330 09:17:59.666334   36325 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0330 09:17:59.666361   36325 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-185000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0330 09:17:59.775539   36325 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Status}}
	I0330 09:17:59.836084   36325 machine.go:88] provisioning docker machine ...
	I0330 09:17:59.836140   36325 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-185000"
	I0330 09:17:59.836248   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:17:59.896796   36325 main.go:141] libmachine: Using SSH client type: native
	I0330 09:17:59.897224   36325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57376 <nil> <nil>}
	I0330 09:17:59.897238   36325 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-185000 && echo "kubernetes-upgrade-185000" | sudo tee /etc/hostname
	I0330 09:18:00.025720   36325 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-185000
	
	I0330 09:18:00.025810   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:00.086210   36325 main.go:141] libmachine: Using SSH client type: native
	I0330 09:18:00.086550   36325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57376 <nil> <nil>}
	I0330 09:18:00.086563   36325 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-185000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-185000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-185000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:18:00.205417   36325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
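
Provisioning above runs over SSH to the published 22/tcp port (127.0.0.1:57376 in this run) as the docker user, with the key written to .minikube/machines/kubernetes-upgrade-185000/id_rsa. A minimal sketch of opening that same connection with golang.org/x/crypto/ssh (an assumed dependency; only meaningful while the container from this run is still up):

// ssh_probe.go: connect to the node the same way the provisioner does and read its hostname.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Port, user and key path as reported by the sshutil.go lines in this log.
	keyPath := "/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:57376", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("node reports hostname: %s", out)
}
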
	I0330 09:18:00.205444   36325 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:18:00.205468   36325 ubuntu.go:177] setting up certificates
	I0330 09:18:00.205477   36325 provision.go:83] configureAuth start
	I0330 09:18:00.205562   36325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-185000
	I0330 09:18:00.266480   36325 provision.go:138] copyHostCerts
	I0330 09:18:00.266578   36325 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:18:00.266587   36325 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:18:00.266695   36325 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:18:00.266896   36325 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:18:00.266903   36325 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:18:00.266969   36325 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:18:00.267129   36325 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:18:00.267135   36325 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:18:00.267197   36325 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:18:00.267337   36325 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-185000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-185000]
	I0330 09:18:00.330441   36325 provision.go:172] copyRemoteCerts
	I0330 09:18:00.330499   36325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:18:00.330548   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:00.391086   36325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57376 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:18:00.479363   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:18:00.496909   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0330 09:18:00.514528   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0330 09:18:00.532279   36325 provision.go:86] duration metric: configureAuth took 326.764861ms
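
configureAuth above generated a server certificate whose SANs cover 192.168.67.2, 127.0.0.1, localhost, minikube and kubernetes-upgrade-185000, then copied it to /etc/docker on the node. A small sketch for confirming what ended up in the locally generated server.pem (path from this run; the file disappears once the profile is cleaned up):

// cert_sans.go: print the subject alternative names of the generated server certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	path := "/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in server.pem")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // per the log: localhost, minikube, kubernetes-upgrade-185000
	fmt.Println("IP SANs:", cert.IPAddresses) // per the log: 192.168.67.2, 127.0.0.1
}
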
	I0330 09:18:00.532293   36325 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:18:00.532428   36325 config.go:182] Loaded profile config "kubernetes-upgrade-185000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0330 09:18:00.532493   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:00.592636   36325 main.go:141] libmachine: Using SSH client type: native
	I0330 09:18:00.593004   36325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57376 <nil> <nil>}
	I0330 09:18:00.593022   36325 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:18:00.712134   36325 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:18:00.712147   36325 ubuntu.go:71] root file system type: overlay
	I0330 09:18:00.712258   36325 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:18:00.712355   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:00.773113   36325 main.go:141] libmachine: Using SSH client type: native
	I0330 09:18:00.773453   36325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57376 <nil> <nil>}
	I0330 09:18:00.773502   36325 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:18:00.900913   36325 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:18:00.901013   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:00.961939   36325 main.go:141] libmachine: Using SSH client type: native
	I0330 09:18:00.962290   36325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57376 <nil> <nil>}
	I0330 09:18:00.962305   36325 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:18:01.580715   36325 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:18:00.898443992 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0330 09:18:01.580744   36325 machine.go:91] provisioned docker machine in 1.744500185s
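
The diff above is the systemd override minikube writes for dockerd: the inherited ExecStart is cleared and replaced with the TLS-enabled command line, and the unit is reloaded and restarted. A sketch that re-checks which ExecStart is actually in effect, reusing the docker exec and systemctl cat commands that appear later in this log (assumes the kubernetes-upgrade-185000 container is still running):

// unit_check.go: confirm the ExecStart override from the diff above is the one in effect.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "exec", "kubernetes-upgrade-185000",
		"sudo", "systemctl", "cat", "docker.service").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "ExecStart=") {
			fmt.Println(line)
		}
	}
}
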
	I0330 09:18:01.580750   36325 client.go:171] LocalClient.Create took 10.209433831s
	I0330 09:18:01.580769   36325 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-185000" took 10.209553548s
	I0330 09:18:01.580780   36325 start.go:300] post-start starting for "kubernetes-upgrade-185000" (driver="docker")
	I0330 09:18:01.580786   36325 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:18:01.580870   36325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:18:01.580929   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:01.642951   36325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57376 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:18:01.731839   36325 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:18:01.735402   36325 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:18:01.735419   36325 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:18:01.735432   36325 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:18:01.735438   36325 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:18:01.735453   36325 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:18:01.735544   36325 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:18:01.735711   36325 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:18:01.735904   36325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:18:01.743649   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:18:01.761240   36325 start.go:303] post-start completed in 180.427869ms
	I0330 09:18:01.761772   36325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-185000
	I0330 09:18:01.823094   36325 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/config.json ...
	I0330 09:18:01.823548   36325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:18:01.823607   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:01.883753   36325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57376 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:18:01.968148   36325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:18:01.972750   36325 start.go:128] duration metric: createHost completed in 10.62332622s
	I0330 09:18:01.972765   36325 start.go:83] releasing machines lock for "kubernetes-upgrade-185000", held for 10.623443736s
	I0330 09:18:01.972852   36325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-185000
	I0330 09:18:02.033809   36325 ssh_runner.go:195] Run: cat /version.json
	I0330 09:18:02.033838   36325 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0330 09:18:02.033876   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:02.033905   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:02.099514   36325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57376 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:18:02.099636   36325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57376 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:18:02.388020   36325 ssh_runner.go:195] Run: systemctl --version
	I0330 09:18:02.392604   36325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0330 09:18:02.397574   36325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0330 09:18:02.417883   36325 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0330 09:18:02.417960   36325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0330 09:18:02.432460   36325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0330 09:18:02.440335   36325 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0330 09:18:02.440354   36325 start.go:481] detecting cgroup driver to use...
	I0330 09:18:02.440368   36325 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:18:02.440442   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:18:02.454038   36325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0330 09:18:02.462619   36325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:18:02.470880   36325 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:18:02.470944   36325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:18:02.479637   36325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:18:02.488163   36325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:18:02.496668   36325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:18:02.505377   36325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:18:02.513332   36325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:18:02.522098   36325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:18:02.529237   36325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:18:02.536368   36325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:18:02.603655   36325 ssh_runner.go:195] Run: sudo systemctl restart containerd
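	The sed edits above pin the containerd sandbox image, disable restrict_oom_score_adj, switch the runtime to io.containerd.runc.v2, set SystemdCgroup = false to match the detected cgroupfs driver, and point conf_dir at /etc/cni/net.d before containerd is restarted. A minimal verification sketch (illustrative wrapper; the keys are taken from the commands above):
	docker exec -t kubernetes-upgrade-185000 sudo grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
	docker exec -t kubernetes-upgrade-185000 sudo systemctl is-active containerd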
	I0330 09:18:02.673937   36325 start.go:481] detecting cgroup driver to use...
	I0330 09:18:02.673961   36325 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:18:02.674039   36325 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:18:02.686949   36325 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:18:02.687019   36325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:18:02.696979   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:18:02.711259   36325 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:18:02.715600   36325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:18:02.731178   36325 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (184 bytes)
	I0330 09:18:02.746229   36325 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:18:02.844298   36325 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:18:02.905081   36325 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:18:02.905099   36325 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:18:02.944741   36325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:18:03.017585   36325 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:18:03.240123   36325 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:18:03.267593   36325 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
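	Here minikube writes a small /etc/docker/daemon.json (only its size, 144 bytes, is logged) to force the cgroupfs cgroup driver, then restarts docker and re-reads the server version. A minimal check mirroring the docker info --format {{.CgroupDriver}} query minikube itself runs later in this log (the docker exec wrapper is illustrative):
	docker exec -t kubernetes-upgrade-185000 sudo cat /etc/docker/daemon.json
	docker exec -t kubernetes-upgrade-185000 docker info --format '{{.CgroupDriver}}'   # expected: cgroupfs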
	I0330 09:18:03.338944   36325 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0330 09:18:03.339152   36325 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-185000 dig +short host.docker.internal
	I0330 09:18:03.462020   36325 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:18:03.462145   36325 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:18:03.466687   36325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:18:03.476866   36325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:18:03.537945   36325 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:18:03.538035   36325 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:18:03.558954   36325 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:18:03.558975   36325 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:18:03.559070   36325 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:18:03.578599   36325 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:18:03.578615   36325 cache_images.go:84] Images are preloaded, skipping loading
	I0330 09:18:03.578702   36325 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:18:03.604402   36325 cni.go:84] Creating CNI manager for ""
	I0330 09:18:03.604428   36325 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 09:18:03.604447   36325 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 09:18:03.604466   36325 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-185000 NodeName:kubernetes-upgrade-185000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:18:03.604579   36325 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-185000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-185000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0330 09:18:03.604649   36325 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-185000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-185000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0330 09:18:03.604715   36325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0330 09:18:03.613186   36325 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:18:03.613249   36325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:18:03.620892   36325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0330 09:18:03.633884   36325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0330 09:18:03.647180   36325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
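	At this point the kubelet drop-in (10-kubeadm.conf carrying the ExecStart shown above), the kubelet.service unit, and the kubeadm config are all staged on the node. A minimal sketch for inspecting what was written, using the paths from the log (the docker exec wrapper is illustrative):
	docker exec -t kubernetes-upgrade-185000 sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	docker exec -t kubernetes-upgrade-185000 sudo systemctl cat kubelet
	docker exec -t kubernetes-upgrade-185000 sudo cat /var/tmp/minikube/kubeadm.yaml.new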
	I0330 09:18:03.660420   36325 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:18:03.664173   36325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:18:03.674133   36325 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000 for IP: 192.168.67.2
	I0330 09:18:03.674155   36325 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:18:03.674327   36325 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:18:03.674387   36325 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:18:03.674434   36325 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.key
	I0330 09:18:03.674459   36325 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.crt with IP's: []
	I0330 09:18:03.851480   36325 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.crt ...
	I0330 09:18:03.851499   36325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.crt: {Name:mk5959fcb4158f4d03b087bfcac449c958a840b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:18:03.851774   36325 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.key ...
	I0330 09:18:03.851783   36325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.key: {Name:mk8e415da493f0dd5fc5a04b9ebf6a7f9cc6410d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:18:03.851992   36325 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key.c7fa3a9e
	I0330 09:18:03.852011   36325 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0330 09:18:03.927402   36325 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.crt.c7fa3a9e ...
	I0330 09:18:03.927410   36325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.crt.c7fa3a9e: {Name:mk43bb4eb5219c52c8806aeb3c9575bf9cd33ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:18:03.927624   36325 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key.c7fa3a9e ...
	I0330 09:18:03.927632   36325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key.c7fa3a9e: {Name:mk3911a0585f7b66b8e4cd9f09b2f36bb9855726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:18:03.927823   36325 certs.go:333] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.crt
	I0330 09:18:03.928003   36325 certs.go:337] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key
	I0330 09:18:03.928177   36325 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.key
	I0330 09:18:03.928191   36325 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.crt with IP's: []
	I0330 09:18:04.021234   36325 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.crt ...
	I0330 09:18:04.021243   36325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.crt: {Name:mkd9f8fcfe20a9fd75eafe78f39af74f063dd950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:18:04.021460   36325 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.key ...
	I0330 09:18:04.021468   36325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.key: {Name:mkdf384011a6763a88463f6ad91dbfeca974e5ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:18:04.021869   36325 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:18:04.021919   36325 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:18:04.021934   36325 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:18:04.021969   36325 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:18:04.022000   36325 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:18:04.022031   36325 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:18:04.022102   36325 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:18:04.022662   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:18:04.041567   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0330 09:18:04.059482   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:18:04.076981   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0330 09:18:04.094433   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:18:04.111890   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:18:04.129639   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:18:04.147035   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:18:04.164672   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:18:04.182449   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:18:04.199953   36325 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:18:04.217458   36325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:18:04.230847   36325 ssh_runner.go:195] Run: openssl version
	I0330 09:18:04.236861   36325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:18:04.246366   36325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:18:04.251204   36325 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:18:04.251305   36325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:18:04.258238   36325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:18:04.267596   36325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:18:04.276612   36325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:18:04.281459   36325 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:18:04.281523   36325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:18:04.287543   36325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
	I0330 09:18:04.295769   36325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:18:04.304069   36325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:18:04.308365   36325 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:18:04.308412   36325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:18:04.314071   36325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
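	Each certificate is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 on this run). A minimal sketch of how one of those link names is derived, mirroring the commands in the log:
	# the link name is the subject hash printed by openssl plus a ".0" suffix
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0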
	I0330 09:18:04.322241   36325 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-185000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:18:04.322339   36325 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:18:04.341126   36325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:18:04.349249   36325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:18:04.356730   36325 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:18:04.356785   36325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:18:04.364537   36325 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:18:04.364563   36325 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:18:04.413022   36325 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:18:04.413583   36325 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:18:04.590730   36325 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:18:04.590833   36325 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:18:04.590909   36325 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:18:04.763756   36325 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:18:04.765719   36325 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:18:04.773250   36325 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:18:04.851067   36325 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:18:04.875297   36325 out.go:204]   - Generating certificates and keys ...
	I0330 09:18:04.875400   36325 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:18:04.875467   36325 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:18:05.032445   36325 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0330 09:18:05.314391   36325 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0330 09:18:05.661021   36325 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0330 09:18:05.884692   36325 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0330 09:18:06.044069   36325 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0330 09:18:06.044217   36325 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-185000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0330 09:18:06.127932   36325 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0330 09:18:06.128114   36325 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-185000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0330 09:18:06.253722   36325 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0330 09:18:06.448971   36325 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0330 09:18:06.540315   36325 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0330 09:18:06.540371   36325 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:18:06.713402   36325 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:18:06.783000   36325 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:18:06.901246   36325 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:18:07.049270   36325 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:18:07.050140   36325 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:18:07.071595   36325 out.go:204]   - Booting up control plane ...
	I0330 09:18:07.071737   36325 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:18:07.071816   36325 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:18:07.071890   36325 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:18:07.072003   36325 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:18:07.072211   36325 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:18:47.060074   36325 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:18:47.061092   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:18:47.061306   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:18:52.062783   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:18:52.063022   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:19:02.063588   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:19:02.063865   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:19:22.065969   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:19:22.066191   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:20:02.066651   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:20:02.066904   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:20:02.066921   36325 kubeadm.go:322] 
	I0330 09:20:02.066987   36325 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:20:02.067062   36325 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:20:02.067101   36325 kubeadm.go:322] 
	I0330 09:20:02.067169   36325 kubeadm.go:322] This error is likely caused by:
	I0330 09:20:02.067196   36325 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:20:02.067319   36325 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:20:02.067327   36325 kubeadm.go:322] 
	I0330 09:20:02.067422   36325 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:20:02.067449   36325 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:20:02.067471   36325 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:20:02.067478   36325 kubeadm.go:322] 
	I0330 09:20:02.067568   36325 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:20:02.067643   36325 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:20:02.067725   36325 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:20:02.067815   36325 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:20:02.067882   36325 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:20:02.067910   36325 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0330 09:20:02.070413   36325 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:20:02.070532   36325 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:20:02.070663   36325 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:20:02.070769   36325 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:20:02.070854   36325 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:20:02.070922   36325 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0330 09:20:02.071097   36325 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-185000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-185000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-185000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-185000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
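	Before minikube resets kubeadm and retries below, the failure output above already names the useful probes: the kubelet never answered its healthz endpoint on port 10248, so the control-plane static pods were never started. A minimal triage sketch run against the node container, combining the commands kubeadm itself suggests (the docker exec wrapper and tail length are illustrative):
	docker exec -t kubernetes-upgrade-185000 systemctl status kubelet
	docker exec -t kubernetes-upgrade-185000 journalctl -xeu kubelet --no-pager | tail -n 100
	docker exec -t kubernetes-upgrade-185000 curl -sS http://localhost:10248/healthz
	docker exec -t kubernetes-upgrade-185000 docker ps -a | grep kube | grep -v pause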
	
	I0330 09:20:02.071140   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0330 09:20:02.484565   36325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:20:02.494881   36325 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:20:02.494948   36325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:20:02.503561   36325 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:20:02.503590   36325 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:20:02.552670   36325 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:20:02.552716   36325 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:20:02.757497   36325 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:20:02.757627   36325 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:20:02.757766   36325 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:20:02.941488   36325 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:20:02.942685   36325 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:20:02.950459   36325 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:20:03.024443   36325 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:20:03.048935   36325 out.go:204]   - Generating certificates and keys ...
	I0330 09:20:03.049041   36325 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:20:03.049120   36325 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:20:03.049238   36325 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:20:03.049430   36325 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:20:03.049540   36325 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:20:03.049628   36325 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:20:03.049718   36325 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:20:03.049789   36325 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:20:03.049868   36325 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:20:03.049935   36325 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:20:03.049976   36325 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:20:03.050043   36325 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:20:03.165626   36325 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:20:03.437012   36325 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:20:03.489141   36325 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:20:03.617463   36325 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:20:03.618335   36325 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:20:03.640081   36325 out.go:204]   - Booting up control plane ...
	I0330 09:20:03.640182   36325 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:20:03.640274   36325 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:20:03.640331   36325 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:20:03.640408   36325 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:20:03.640548   36325 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:20:43.626963   36325 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:20:43.627805   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:20:43.627945   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:20:48.629755   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:20:48.629985   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:20:58.630267   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:20:58.630724   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:21:18.631672   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:21:18.631884   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:21:58.632865   36325 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:21:58.633032   36325 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:21:58.633040   36325 kubeadm.go:322] 
	I0330 09:21:58.633078   36325 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:21:58.633108   36325 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:21:58.633115   36325 kubeadm.go:322] 
	I0330 09:21:58.633137   36325 kubeadm.go:322] This error is likely caused by:
	I0330 09:21:58.633168   36325 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:21:58.633295   36325 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:21:58.633302   36325 kubeadm.go:322] 
	I0330 09:21:58.633395   36325 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:21:58.633422   36325 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:21:58.633449   36325 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:21:58.633453   36325 kubeadm.go:322] 
	I0330 09:21:58.633551   36325 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:21:58.633618   36325 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:21:58.633679   36325 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:21:58.633754   36325 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:21:58.633861   36325 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:21:58.633922   36325 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0330 09:21:58.636710   36325 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:21:58.636829   36325 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:21:58.636934   36325 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:21:58.637010   36325 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:21:58.637081   36325 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:21:58.637137   36325 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0330 09:21:58.637156   36325 kubeadm.go:403] StartCluster complete in 3m54.312390776s
	I0330 09:21:58.637251   36325 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:21:58.660591   36325 logs.go:277] 0 containers: []
	W0330 09:21:58.660603   36325 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:21:58.660684   36325 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:21:58.683634   36325 logs.go:277] 0 containers: []
	W0330 09:21:58.683646   36325 logs.go:279] No container was found matching "etcd"
	I0330 09:21:58.683709   36325 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:21:58.704244   36325 logs.go:277] 0 containers: []
	W0330 09:21:58.704257   36325 logs.go:279] No container was found matching "coredns"
	I0330 09:21:58.704346   36325 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:21:58.725556   36325 logs.go:277] 0 containers: []
	W0330 09:21:58.725569   36325 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:21:58.725631   36325 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:21:58.746385   36325 logs.go:277] 0 containers: []
	W0330 09:21:58.746399   36325 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:21:58.746459   36325 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:21:58.770740   36325 logs.go:277] 0 containers: []
	W0330 09:21:58.770751   36325 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:21:58.770853   36325 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:21:58.790707   36325 logs.go:277] 0 containers: []
	W0330 09:21:58.790721   36325 logs.go:279] No container was found matching "kindnet"
	I0330 09:21:58.790731   36325 logs.go:123] Gathering logs for kubelet ...
	I0330 09:21:58.790738   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:21:58.832593   36325 logs.go:123] Gathering logs for dmesg ...
	I0330 09:21:58.832608   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:21:58.847118   36325 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:21:58.847132   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:21:58.907924   36325 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:21:58.907938   36325 logs.go:123] Gathering logs for Docker ...
	I0330 09:21:58.907945   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:21:58.935920   36325 logs.go:123] Gathering logs for container status ...
	I0330 09:21:58.935941   36325 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:22:00.991122   36325 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05515361s)
	W0330 09:22:00.991251   36325 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0330 09:22:00.991270   36325 out.go:239] * 
	* 
	W0330 09:22:00.991382   36325 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:22:00.991406   36325 out.go:239] * 
	* 
	W0330 09:22:00.992117   36325 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0330 09:22:01.054565   36325 out.go:177] 
	W0330 09:22:01.096797   36325 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:22:01.096866   36325 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0330 09:22:01.096898   36325 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0330 09:22:01.117606   36325 out.go:177] 

                                                
                                                
** /stderr **
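The kubeadm output captured above already names the next debugging steps: check the kubelet service and look for a crashed control-plane container on the node. A minimal way to follow that advice while the node container is still around, reusing the profile name from this run (a sketch only; CONTAINERID is the placeholder from kubeadm's own message):

    # open a shell on the failing node
    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-185000
    # then, inside the node:
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100
    sudo docker ps -a | grep kube | grep -v pause
    sudo docker logs CONTAINERID

The preflight warnings and the suggestion at the end of the stderr block point at the cgroup driver mismatch (Docker on cgroupfs, systemd recommended), so one plausible retry of the failing start command, following that suggestion, is:

    out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 \
      --kubernetes-version=v1.16.0 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd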
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-185000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-185000: (1.677709766s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-185000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-185000 status --format={{.Host}}: exit status 7 (130.617255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
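Exit status 7 from the status check above reflects the stopped cluster state rather than a broken command, which is why the harness notes it "may be ok". The same Go-template flag used by the test can pull the other status fields as well; a small sketch (field names assumed from minikube's default status output):

    out/minikube-darwin-amd64 -p kubernetes-upgrade-185000 status \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
    # expected here: every component reports Stopped, and the exit code stays non-zero until the cluster is started again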
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker : (1m41.264566627s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-185000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (475.062464ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-185000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-185000
	    minikube start -p kubernetes-upgrade-185000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1850002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-185000 --kubernetes-version=v1.27.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-185000 --memory=2200 --kubernetes-version=v1.27.0-rc.0 --alsologtostderr -v=1 --driver=docker : (18.395730969s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-03-30 09:24:03.196741 -0700 PDT m=+2800.456151988
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-185000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-185000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2235be4b37f64bad062da15a888a24682c92e80af2c00cddbe6f90b39bb989c3",
	        "Created": "2023-03-30T16:17:58.470967399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 589800,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:22:04.672127119Z",
	            "FinishedAt": "2023-03-30T16:22:01.714078279Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/2235be4b37f64bad062da15a888a24682c92e80af2c00cddbe6f90b39bb989c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2235be4b37f64bad062da15a888a24682c92e80af2c00cddbe6f90b39bb989c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/2235be4b37f64bad062da15a888a24682c92e80af2c00cddbe6f90b39bb989c3/hosts",
	        "LogPath": "/var/lib/docker/containers/2235be4b37f64bad062da15a888a24682c92e80af2c00cddbe6f90b39bb989c3/2235be4b37f64bad062da15a888a24682c92e80af2c00cddbe6f90b39bb989c3-json.log",
	        "Name": "/kubernetes-upgrade-185000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-185000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-185000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b1c229996256c8867070e02958405a3839cedf5204828aacd1ce864a7f527de0-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b1c229996256c8867070e02958405a3839cedf5204828aacd1ce864a7f527de0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b1c229996256c8867070e02958405a3839cedf5204828aacd1ce864a7f527de0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b1c229996256c8867070e02958405a3839cedf5204828aacd1ce864a7f527de0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-185000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-185000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-185000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-185000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-185000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "115a20f77214bd21e24c1209548aa787126135e2c26e1088a9f8a747191dabd4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57636"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57637"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57638"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57639"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57640"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/115a20f77214",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-185000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2235be4b37f6",
	                        "kubernetes-upgrade-185000"
	                    ],
	                    "NetworkID": "a3055be77d240096742c5513dc5f4c591fb8e2f0aa2737b1340741d80b157c66",
	                    "EndpointID": "850f8ef69e1f285e90d649a90cadfc0779553317b4d8fba49a09ab77d5a85dfa",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
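When only a few fields from a dump like the one above matter (container state, restart count, or the host port that the node's 8443/tcp apiserver port is published on), docker inspect's Go-template output is usually enough; a small sketch against the same container name:

    # current state and restart count
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' kubernetes-upgrade-185000

    # host port mapped to 8443/tcp inside the node
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-185000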
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-185000 -n kubernetes-upgrade-185000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-185000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-185000 logs -n 25: (2.371489903s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-378000 sudo journalctl                       | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | -xeu kubelet --all --full                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo cat                              | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo cat                              | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo cat                              | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo docker                           | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo cat                              | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo cat                              | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo                                  | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo cat                              | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo cat                              | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo containerd                       | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT |                     |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo systemctl                        | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo find                             | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-378000 sudo crio                             | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-378000                                       | auto-378000               | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:23 PDT |
	| start   | -p kindnet-378000                                    | kindnet-378000            | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker                        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-185000                         | kubernetes-upgrade-185000 | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-185000                         | kubernetes-upgrade-185000 | jenkins | v1.29.0 | 30 Mar 23 09:23 PDT | 30 Mar 23 09:24 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0                    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
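	The table above is minikube's audit log of the commands issued against each profile while diagnostics were collected. Any of the ssh rows can be replayed by hand against a profile that is still running (auto-378000 was deleted at 09:23, so substitute a live profile as needed); a minimal sketch using the same binary and command form shown in the table:

	    # replay one of the diagnostic commands from the audit table above
	    out/minikube-darwin-amd64 ssh -p auto-378000 sudo systemctl cat docker --no-pager
	    # or read the kubelet config the same way
	    out/minikube-darwin-amd64 ssh -p auto-378000 sudo cat /var/lib/kubelet/config.yaml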
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 09:23:44
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 09:23:44.840313   38578 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:23:44.840475   38578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:23:44.840482   38578 out.go:309] Setting ErrFile to fd 2...
	I0330 09:23:44.840486   38578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:23:44.840605   38578 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:23:44.841948   38578 out.go:303] Setting JSON to false
	I0330 09:23:44.862105   38578 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8592,"bootTime":1680184832,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:23:44.862196   38578 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:23:44.883399   38578 out.go:177] * [kubernetes-upgrade-185000] minikube v1.29.0 on Darwin 13.3
	I0330 09:23:44.904379   38578 notify.go:220] Checking for updates...
	I0330 09:23:44.941363   38578 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:23:44.983425   38578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:23:45.004358   38578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:23:45.025386   38578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:23:45.046401   38578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:23:45.067406   38578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:23:45.088788   38578 config.go:182] Loaded profile config "kubernetes-upgrade-185000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0330 09:23:45.089204   38578 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:23:45.155740   38578 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:23:45.155890   38578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:23:45.345494   38578 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:60 SystemTime:2023-03-30 16:23:45.210075185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:23:45.367432   38578 out.go:177] * Using the docker driver based on existing profile
	I0330 09:23:45.388974   38578 start.go:295] selected driver: docker
	I0330 09:23:45.389005   38578 start.go:859] validating driver "docker" against &{Name:kubernetes-upgrade-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-185000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:23:45.389158   38578 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:23:45.393147   38578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:23:45.591819   38578 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:60 SystemTime:2023-03-30 16:23:45.447180934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:23:45.591972   38578 cni.go:84] Creating CNI manager for ""
	I0330 09:23:45.591987   38578 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:23:45.592001   38578 start_flags.go:319] config:
	{Name:kubernetes-upgrade-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-185000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Sock
etVMnetPath: StaticIP:}
	I0330 09:23:45.634339   38578 out.go:177] * Starting control plane node kubernetes-upgrade-185000 in cluster kubernetes-upgrade-185000
	I0330 09:23:45.655428   38578 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:23:45.677209   38578 out.go:177] * Pulling base image ...
	I0330 09:23:45.719294   38578 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0330 09:23:45.719317   38578 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:23:45.719370   38578 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0330 09:23:45.719388   38578 cache.go:57] Caching tarball of preloaded images
	I0330 09:23:45.719541   38578 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:23:45.719556   38578 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.0-rc.0 on docker
	I0330 09:23:45.720295   38578 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/config.json ...
	I0330 09:23:45.778981   38578 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:23:45.779003   38578 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:23:45.779024   38578 cache.go:193] Successfully downloaded all kic artifacts
	I0330 09:23:45.779075   38578 start.go:364] acquiring machines lock for kubernetes-upgrade-185000: {Name:mk0fd81fda3374b37e2a514dc21dc34a8c66fd00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:23:45.779169   38578 start.go:368] acquired machines lock for "kubernetes-upgrade-185000" in 75.085µs
	I0330 09:23:45.779193   38578 start.go:96] Skipping create...Using existing machine configuration
	I0330 09:23:45.779203   38578 fix.go:55] fixHost starting: 
	I0330 09:23:45.779418   38578 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Status}}
	I0330 09:23:45.840407   38578 fix.go:103] recreateIfNeeded on kubernetes-upgrade-185000: state=Running err=<nil>
	W0330 09:23:45.840435   38578 fix.go:129] unexpected machine state, will restart: <nil>
	I0330 09:23:45.862101   38578 out.go:177] * Updating the running docker "kubernetes-upgrade-185000" container ...
	I0330 09:23:43.814748   38435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0330 09:23:43.814839   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:43.814842   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=e1b28cf61afe27b0a5598da1ee43bf06463b8063 minikube.k8s.io/name=kindnet-378000 minikube.k8s.io/updated_at=2023_03_30T09_23_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:43.895245   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:43.923716   38435 ops.go:34] apiserver oom_adj: -16
	I0330 09:23:44.489899   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:44.989320   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:45.489214   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:45.989130   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:46.489246   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:46.989567   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:47.489270   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:47.989289   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:48.490264   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:45.882682   38578 machine.go:88] provisioning docker machine ...
	I0330 09:23:45.882717   38578 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-185000"
	I0330 09:23:45.882840   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:45.946518   38578 main.go:141] libmachine: Using SSH client type: native
	I0330 09:23:45.946912   38578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57636 <nil> <nil>}
	I0330 09:23:45.946925   38578 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-185000 && echo "kubernetes-upgrade-185000" | sudo tee /etc/hostname
	I0330 09:23:46.072001   38578 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-185000
	
	I0330 09:23:46.072108   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:46.134484   38578 main.go:141] libmachine: Using SSH client type: native
	I0330 09:23:46.134823   38578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57636 <nil> <nil>}
	I0330 09:23:46.134842   38578 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-185000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-185000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-185000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:23:46.251903   38578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
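	The SSH script just above is how the provisioner pins the node's own hostname in /etc/hosts: if no entry already maps the profile name, it either rewrites an existing 127.0.1.1 line or appends one. A hedged way to confirm the result from the host after this step (sketch, not part of the log; profile name taken from this run):

	    # check the hostname and its /etc/hosts mapping inside the node
	    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-185000 -- "hostname && grep 127.0.1.1 /etc/hosts"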
	I0330 09:23:46.251924   38578 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:23:46.251943   38578 ubuntu.go:177] setting up certificates
	I0330 09:23:46.251956   38578 provision.go:83] configureAuth start
	I0330 09:23:46.252058   38578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-185000
	I0330 09:23:46.314305   38578 provision.go:138] copyHostCerts
	I0330 09:23:46.314396   38578 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:23:46.314408   38578 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:23:46.314515   38578 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:23:46.315332   38578 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:23:46.315343   38578 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:23:46.315457   38578 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:23:46.315993   38578 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:23:46.316006   38578 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:23:46.316074   38578 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:23:46.316218   38578 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-185000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-185000]
	I0330 09:23:46.419447   38578 provision.go:172] copyRemoteCerts
	I0330 09:23:46.419508   38578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:23:46.419562   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:46.480192   38578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57636 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:23:46.566516   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:23:46.584728   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0330 09:23:46.602511   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0330 09:23:46.620668   38578 provision.go:86] duration metric: configureAuth took 368.693998ms
	I0330 09:23:46.620682   38578 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:23:46.620825   38578 config.go:182] Loaded profile config "kubernetes-upgrade-185000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0330 09:23:46.620890   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:46.683240   38578 main.go:141] libmachine: Using SSH client type: native
	I0330 09:23:46.683584   38578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57636 <nil> <nil>}
	I0330 09:23:46.683595   38578 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:23:46.802740   38578 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:23:46.802752   38578 ubuntu.go:71] root file system type: overlay
	I0330 09:23:46.802852   38578 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:23:46.802937   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:46.865371   38578 main.go:141] libmachine: Using SSH client type: native
	I0330 09:23:46.865711   38578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57636 <nil> <nil>}
	I0330 09:23:46.865759   38578 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:23:46.992225   38578 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:23:46.992342   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:47.060178   38578 main.go:141] libmachine: Using SSH client type: native
	I0330 09:23:47.060695   38578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 57636 <nil> <nil>}
	I0330 09:23:47.060727   38578 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:23:47.182597   38578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
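	The unit file echoed above is written to /lib/systemd/system/docker.service.new and only installed when it differs from the live unit: diff -u exits non-zero on a difference, which triggers the mv / daemon-reload / restart branch of the command at 09:23:47.060727. The empty output here suggests the unit was already up to date, so docker was left running as-is. A hedged check of the flags dockerd actually ended up with (sketch; systemctl cat is standard systemd, the grep runs locally over the SSH output):

	    # confirm the effective ExecStart of the docker unit on the node
	    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-185000 -- sudo systemctl cat docker --no-pager | grep ExecStart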
	I0330 09:23:47.182613   38578 machine.go:91] provisioned docker machine in 1.299909839s
	I0330 09:23:47.182624   38578 start.go:300] post-start starting for "kubernetes-upgrade-185000" (driver="docker")
	I0330 09:23:47.182630   38578 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:23:47.182698   38578 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:23:47.182758   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:47.244783   38578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57636 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:23:47.332481   38578 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:23:47.336249   38578 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:23:47.336268   38578 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:23:47.336275   38578 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:23:47.336280   38578 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:23:47.336288   38578 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:23:47.336387   38578 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:23:47.336535   38578 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:23:47.336701   38578 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:23:47.344564   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:23:47.362531   38578 start.go:303] post-start completed in 179.894059ms
	I0330 09:23:47.362611   38578 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:23:47.362688   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:47.424771   38578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57636 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:23:47.508242   38578 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:23:47.513247   38578 fix.go:57] fixHost completed within 1.734024216s
	I0330 09:23:47.513267   38578 start.go:83] releasing machines lock for "kubernetes-upgrade-185000", held for 1.734078066s
	I0330 09:23:47.513364   38578 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-185000
	I0330 09:23:47.577524   38578 ssh_runner.go:195] Run: cat /version.json
	I0330 09:23:47.577532   38578 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0330 09:23:47.577601   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:47.577616   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:47.650215   38578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57636 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:23:47.650371   38578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57636 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:23:47.788742   38578 ssh_runner.go:195] Run: systemctl --version
	I0330 09:23:47.793632   38578 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0330 09:23:47.798579   38578 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0330 09:23:47.798634   38578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0330 09:23:47.806847   38578 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0330 09:23:47.814566   38578 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0330 09:23:47.814580   38578 start.go:481] detecting cgroup driver to use...
	I0330 09:23:47.814591   38578 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:23:47.814675   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:23:47.828799   38578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0330 09:23:47.837778   38578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:23:47.846726   38578 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:23:47.846784   38578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:23:47.855613   38578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:23:47.864566   38578 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:23:47.873780   38578 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:23:47.882907   38578 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:23:47.891097   38578 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:23:47.899728   38578 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:23:47.907219   38578 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:23:47.914534   38578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:23:48.001093   38578 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 09:23:49.716202   38578 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (1.715076472s)
	I0330 09:23:49.716218   38578 start.go:481] detecting cgroup driver to use...
	I0330 09:23:49.716229   38578 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:23:49.716307   38578 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:23:49.730516   38578 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:23:49.730591   38578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:23:49.741691   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:23:49.757129   38578 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:23:49.761374   38578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:23:49.769883   38578 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0330 09:23:49.785973   38578 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:23:49.893168   38578 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:23:49.977034   38578 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:23:49.977053   38578 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:23:49.991557   38578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:23:50.094961   38578 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:23:50.623289   38578 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:23:50.751246   38578 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0330 09:23:50.874707   38578 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:23:51.068252   38578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:23:51.258188   38578 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0330 09:23:51.337827   38578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:23:51.476324   38578 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0330 09:23:51.651188   38578 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0330 09:23:51.651295   38578 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0330 09:23:51.656344   38578 start.go:549] Will wait 60s for crictl version
	I0330 09:23:51.656397   38578 ssh_runner.go:195] Run: which crictl
	I0330 09:23:51.660897   38578 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0330 09:23:51.747397   38578 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0330 09:23:51.747488   38578 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:23:51.777988   38578 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
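	From 09:23:47.814 onward the provisioner detects the cgroupfs cgroup driver, points crictl at the cri-dockerd socket via /etc/crictl.yaml, rewrites /etc/containerd/config.toml to match, and restarts containerd, docker and cri-docker before waiting on /var/run/cri-dockerd.sock. The crictl version output above (RuntimeName: docker, RuntimeVersion: 23.0.1) is the check that the socket answers. A hedged manual equivalent, run on the node itself (sketch, not from the log):

	    # talk to the same CRI endpoint /etc/crictl.yaml points at
	    sudo cat /etc/crictl.yaml
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version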
	I0330 09:23:48.989798   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:49.491304   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:49.989236   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:50.489747   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:50.989689   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:51.489340   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:51.989198   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:52.489317   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:52.990234   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:53.489458   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:51.868217   38578 out.go:204] * Preparing Kubernetes v1.27.0-rc.0 on Docker 23.0.1 ...
	I0330 09:23:51.868312   38578 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-185000 dig +short host.docker.internal
	I0330 09:23:51.988231   38578 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:23:51.988366   38578 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:23:51.993193   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:52.059125   38578 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0330 09:23:52.059210   38578 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:23:52.080566   38578 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:23:52.080588   38578 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:23:52.080678   38578 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:23:52.102665   38578 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:23:52.102685   38578 cache_images.go:84] Images are preloaded, skipping loading
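	Note that the image list above mixes the freshly preloaded v1.27.0-rc.0 images from registry.k8s.io with the v1.16.0 k8s.gcr.io images left over from the first start of this profile (see the kubernetes-upgrade-185000 row with --kubernetes-version=v1.16.0 in the command table), which is why cache_images reports everything as already present. The same list can be pulled from the node directly with the command the log runs over SSH; a sketch:

	    # list the images cached inside the node
	    out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-185000 -- docker images --format '{{.Repository}}:{{.Tag}}'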
	I0330 09:23:52.102802   38578 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:23:52.129881   38578 cni.go:84] Creating CNI manager for ""
	I0330 09:23:52.129904   38578 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:23:52.129919   38578 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 09:23:52.129945   38578 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-185000 NodeName:kubernetes-upgrade-185000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca
.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:23:52.130051   38578 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-185000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
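
	The config above is rendered from Go templates filled with the kubeadm options logged at kubeadm.go:172. A much-reduced sketch of that templating step follows; the struct and template below are simplified placeholders, not minikube's actual ones:

// kubeadm_config_sketch.go - renders a tiny slice of a kubeadm config from a
// text/template, roughly mirroring how the YAML above is generated.
package main

import (
	"os"
	"text/template"
)

type params struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	p := params{
		AdvertiseAddress:  "192.168.67.2",
		BindPort:          8443,
		NodeName:          "kubernetes-upgrade-185000",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.27.0-rc.0",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
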
	
	I0330 09:23:52.130127   38578 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-185000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-185000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0330 09:23:52.130192   38578 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.0-rc.0
	I0330 09:23:52.138459   38578 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:23:52.138541   38578 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:23:52.146260   38578 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I0330 09:23:52.160423   38578 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0330 09:23:52.174025   38578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
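
	The kubelet drop-in shown at kubeadm.go:968 is generated in memory and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf over SSH ("scp memory" above). A local sketch of materializing that drop-in and reloading systemd, assuming root privileges and the same paths as the log:

// kubelet_dropin_sketch.go - writes the 10-kubeadm.conf drop-in and reloads
// systemd; minikube streams these bytes over SSH instead of writing locally,
// so the paths and privileges here are illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const dropin = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.27.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-185000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
`

func main() {
	dir := "/etc/systemd/system/kubelet.service.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		panic(err)
	}
	path := dir + "/10-kubeadm.conf"
	if err := os.WriteFile(path, []byte(dropin), 0o644); err != nil {
		panic(err)
	}
	// Pick up the new drop-in; the kubelet itself is (re)started later by
	// "kubeadm init phase kubelet-start".
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("daemon-reload: %v: %s", err, out))
	}
	fmt.Println("wrote", path)
}
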
	I0330 09:23:52.187741   38578 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:23:52.192085   38578 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000 for IP: 192.168.67.2
	I0330 09:23:52.192101   38578 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:23:52.192261   38578 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:23:52.192314   38578 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:23:52.192403   38578 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.key
	I0330 09:23:52.192473   38578 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key.c7fa3a9e
	I0330 09:23:52.192535   38578 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.key
	I0330 09:23:52.192740   38578 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:23:52.192785   38578 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:23:52.192797   38578 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:23:52.192831   38578 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:23:52.192861   38578 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:23:52.192900   38578 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:23:52.192971   38578 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:23:52.193544   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:23:52.211563   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0330 09:23:52.228996   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:23:52.246991   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0330 09:23:52.264817   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:23:52.282959   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:23:52.300759   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:23:52.320116   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:23:52.340275   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:23:52.363313   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:23:52.389912   38578 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:23:52.418611   38578 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:23:52.450157   38578 ssh_runner.go:195] Run: openssl version
	I0330 09:23:52.456859   38578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:23:52.466782   38578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:23:52.472567   38578 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:23:52.472634   38578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:23:52.479160   38578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
	I0330 09:23:52.488497   38578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:23:52.499677   38578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:23:52.534961   38578 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:23:52.535036   38578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:23:52.541729   38578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:23:52.554796   38578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:23:52.568504   38578 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:23:52.573374   38578 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:23:52.573445   38578 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:23:52.580476   38578 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
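
	The ln -fs commands above give each CA bundle the <subject-hash>.0 name that OpenSSL uses for lookups in /etc/ssl/certs (for example b5213941.0 for minikubeCA.pem), and the hash is exactly what the openssl x509 -hash -noout calls compute. A small local sketch of the same naming scheme, assuming openssl is on PATH; the paths are illustrative, since the real run does this over SSH on the node:

// cert_hash_link_sketch.go - hash a PEM certificate with openssl and symlink
// it under /etc/ssl/certs/<hash>.0, mirroring the commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of "ln -fs": drop any stale link, then re-create it.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
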
	I0330 09:23:52.589658   38578 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-185000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:kubernetes-upgrade-185000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:23:52.589795   38578 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:23:52.637957   38578 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:23:52.646853   38578 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0330 09:23:52.646879   38578 kubeadm.go:633] restartCluster start
	I0330 09:23:52.646960   38578 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0330 09:23:52.655810   38578 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:23:52.655916   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:23:52.724705   38578 kubeconfig.go:92] found "kubernetes-upgrade-185000" server: "https://127.0.0.1:57640"
	I0330 09:23:52.725322   38578 kapi.go:59] client config for kubernetes-upgrade-185000: &rest.Config{Host:"https://127.0.0.1:57640", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.key", CAFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24f0420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0330 09:23:52.726093   38578 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0330 09:23:52.742291   38578 api_server.go:165] Checking apiserver status ...
	I0330 09:23:52.742355   38578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:23:52.752643   38578 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/13466/cgroup
	W0330 09:23:52.763025   38578 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/13466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:23:52.763106   38578 ssh_runner.go:195] Run: ls
	I0330 09:23:52.768041   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:23:53.414831   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0330 09:23:53.414933   38578 retry.go:31] will retry after 201.563504ms: https://127.0.0.1:57640/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0330 09:23:53.616881   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:23:53.622963   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:23:53.622987   38578 retry.go:31] will retry after 342.750143ms: https://127.0.0.1:57640/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:23:53.966106   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:23:53.972357   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:23:53.972376   38578 retry.go:31] will retry after 439.309654ms: https://127.0.0.1:57640/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:23:54.411809   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:23:54.417191   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:23:54.417213   38578 retry.go:31] will retry after 428.885519ms: https://127.0.0.1:57640/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:23:53.990363   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:54.489486   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:54.990228   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:55.489389   38435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:23:55.553212   38435 kubeadm.go:1073] duration metric: took 11.738358372s to wait for elevateKubeSystemPrivileges.
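
	The repeated "kubectl get sa default" calls above appear to be how the bootstrapper waits, as part of elevateKubeSystemPrivileges, for kubeadm's controller-manager to create the default service account before proceeding. A minimal sketch of that wait loop, reusing the binary and kubeconfig paths from the log (otherwise illustrative):

// sa_wait_sketch.go - poll "kubectl get sa default" until the default
// ServiceAccount exists, mirroring the retries logged above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.26.3/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("default service account never appeared")
}
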
	I0330 09:23:55.553235   38435 kubeadm.go:403] StartCluster complete in 21.524850648s
	I0330 09:23:55.553257   38435 settings.go:142] acquiring lock: {Name:mkee06510b0682aea765fc9cbf62cdda0355bccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:23:55.553352   38435 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:23:55.554049   38435 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:23:55.554288   38435 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0330 09:23:55.554317   38435 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0330 09:23:55.554382   38435 addons.go:66] Setting storage-provisioner=true in profile "kindnet-378000"
	I0330 09:23:55.554388   38435 addons.go:66] Setting default-storageclass=true in profile "kindnet-378000"
	I0330 09:23:55.554397   38435 addons.go:228] Setting addon storage-provisioner=true in "kindnet-378000"
	I0330 09:23:55.554405   38435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-378000"
	I0330 09:23:55.554428   38435 config.go:182] Loaded profile config "kindnet-378000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:23:55.554431   38435 host.go:66] Checking if "kindnet-378000" exists ...
	I0330 09:23:55.554657   38435 cli_runner.go:164] Run: docker container inspect kindnet-378000 --format={{.State.Status}}
	I0330 09:23:55.554815   38435 cli_runner.go:164] Run: docker container inspect kindnet-378000 --format={{.State.Status}}
	I0330 09:23:55.673939   38435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0330 09:23:55.711201   38435 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:23:55.711216   38435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0330 09:23:55.711298   38435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-378000
	I0330 09:23:55.713638   38435 addons.go:228] Setting addon default-storageclass=true in "kindnet-378000"
	I0330 09:23:55.713673   38435 host.go:66] Checking if "kindnet-378000" exists ...
	I0330 09:23:55.714022   38435 cli_runner.go:164] Run: docker container inspect kindnet-378000 --format={{.State.Status}}
	I0330 09:23:55.721035   38435 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0330 09:23:55.799307   38435 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0330 09:23:55.799321   38435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0330 09:23:55.799408   38435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-378000
	I0330 09:23:55.801290   38435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57822 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kindnet-378000/id_rsa Username:docker}
	I0330 09:23:55.884854   38435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57822 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kindnet-378000/id_rsa Username:docker}
	I0330 09:23:55.961228   38435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:23:56.050036   38435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0330 09:23:56.078876   38435 start.go:917] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0330 09:23:56.089040   38435 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-378000" context rescaled to 1 replicas
	I0330 09:23:56.089072   38435 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:23:56.129054   38435 out.go:177] * Verifying Kubernetes components...
	I0330 09:23:56.166934   38435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:23:56.463142   38435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-378000
	I0330 09:23:56.487410   38435 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0330 09:23:56.507926   38435 addons.go:499] enable addons completed in 953.569088ms: enabled=[storage-provisioner default-storageclass]
	I0330 09:23:56.559073   38435 node_ready.go:35] waiting up to 15m0s for node "kindnet-378000" to be "Ready" ...
	I0330 09:23:56.563769   38435 node_ready.go:49] node "kindnet-378000" has status "Ready":"True"
	I0330 09:23:56.563780   38435 node_ready.go:38] duration metric: took 4.683847ms waiting for node "kindnet-378000" to be "Ready" ...
	I0330 09:23:56.563786   38435 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0330 09:23:56.574730   38435 pod_ready.go:78] waiting up to 15m0s for pod "coredns-787d4945fb-5nkxs" in "kube-system" namespace to be "Ready" ...
	I0330 09:23:58.590254   38435 pod_ready.go:102] pod "coredns-787d4945fb-5nkxs" in "kube-system" namespace has status "Ready":"False"
	I0330 09:23:54.846686   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:23:54.870291   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 200:
	ok
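
	The healthz probing above hits https://127.0.0.1:57640/healthz and retries until it gets a plain 200 "ok"; the intermediate 403 (anonymous access is rejected until the RBAC bootstrap roles exist) and 500 (poststarthooks such as rbac/bootstrap-roles still pending) responses are expected during startup. A minimal sketch of such a poll loop follows; for brevity it simply skips TLS verification, whereas the real client is built from the cluster CA and client certs shown in the rest.Config lines above:

// healthz_poll_sketch.go - retry the apiserver's /healthz endpoint until it
// returns 200 "ok", tolerating the 403/500 responses seen above. The port and
// timeout are illustrative.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification: the serving cert is signed by the cluster CA,
			// which is not in the host trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://127.0.0.1:57640/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("apiserver never became healthy")
}
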
	I0330 09:23:54.883601   38578 system_pods.go:86] 5 kube-system pods found
	I0330 09:23:54.883618   38578 system_pods.go:89] "etcd-kubernetes-upgrade-185000" [80464f7c-1133-49a5-bae8-8914a8404e8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0330 09:23:54.883625   38578 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-185000" [be9095f4-3c40-4d93-906b-f661614a50fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0330 09:23:54.883633   38578 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-185000" [d734920b-c12d-4fde-84a4-822eca8fe29b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0330 09:23:54.883640   38578 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-185000" [28d1e6ff-4b71-4af2-be82-5c3a130c5d72] Pending
	I0330 09:23:54.883645   38578 system_pods.go:89] "storage-provisioner" [c612b261-a6e5-476c-ae6f-610dc5770bce] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0330 09:23:54.883651   38578 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy, kube-scheduler
	I0330 09:23:54.883658   38578 kubeadm.go:1120] stopping kube-system containers ...
	I0330 09:23:54.883729   38578 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:23:54.906510   38578 docker.go:465] Stopping containers: [2c299d1fedb1 3738b13d703d 958a73a1ac4a 9934f7a19853 f5f6103c54f8 adf9b140b0ef 88ee712aafb8 307ce6709b46 cb14f46b5bff 6b4ea2ac78bd f6213210269d 10f4c300b06d 416ad712fdc1 cfa72c3c9f17 1a0b28637b5f 516a4f51e199 c5a9fa7c8517]
	I0330 09:23:54.906632   38578 ssh_runner.go:195] Run: docker stop 2c299d1fedb1 3738b13d703d 958a73a1ac4a 9934f7a19853 f5f6103c54f8 adf9b140b0ef 88ee712aafb8 307ce6709b46 cb14f46b5bff 6b4ea2ac78bd f6213210269d 10f4c300b06d 416ad712fdc1 cfa72c3c9f17 1a0b28637b5f 516a4f51e199 c5a9fa7c8517
	I0330 09:23:56.039750   38578 ssh_runner.go:235] Completed: docker stop 2c299d1fedb1 3738b13d703d 958a73a1ac4a 9934f7a19853 f5f6103c54f8 adf9b140b0ef 88ee712aafb8 307ce6709b46 cb14f46b5bff 6b4ea2ac78bd f6213210269d 10f4c300b06d 416ad712fdc1 cfa72c3c9f17 1a0b28637b5f 516a4f51e199 c5a9fa7c8517: (1.133079841s)
	I0330 09:23:56.039852   38578 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0330 09:23:56.080144   38578 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:23:56.097138   38578 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Mar 30 16:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Mar 30 16:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5819 Mar 30 16:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Mar 30 16:20 /etc/kubernetes/scheduler.conf
	
	I0330 09:23:56.097207   38578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0330 09:23:56.106188   38578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0330 09:23:56.114074   38578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0330 09:23:56.121792   38578 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0330 09:23:56.129438   38578 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:23:56.140530   38578 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0330 09:23:56.140554   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:23:56.215233   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:23:56.933067   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:23:57.081292   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:23:57.134363   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
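
	Because existing configuration files were found, the cluster is reconfigured by re-running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd, and later "addon all") against the refreshed /var/tmp/minikube/kubeadm.yaml instead of doing a full kubeadm init. A standalone sketch of that sequence, assuming the binary and config paths from the log:

// restart_phases_sketch.go - replays the kubeadm init phases used above when
// reconfiguring an existing cluster; error handling is intentionally crude.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.27.0-rc.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		fmt.Println("running kubeadm", args)
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("kubeadm %v failed: %v\n%s", args, err, out))
		}
	}
}
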
	I0330 09:23:57.196403   38578 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:23:57.196475   38578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:23:57.709581   38578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:23:58.209507   38578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:23:58.248572   38578 api_server.go:71] duration metric: took 1.052175785s to wait for apiserver process to appear ...
	I0330 09:23:58.248631   38578 api_server.go:87] waiting for apiserver healthz status ...
	I0330 09:23:58.248646   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:24:00.247693   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0330 09:24:00.247714   38578 api_server.go:102] status: https://127.0.0.1:57640/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0330 09:24:00.747928   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:24:00.753480   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0330 09:24:00.753492   38578 api_server.go:102] status: https://127.0.0.1:57640/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:24:01.247950   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:24:01.253449   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0330 09:24:01.253465   38578 api_server.go:102] status: https://127.0.0.1:57640/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:24:01.747865   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:24:01.754273   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 200:
	ok
	I0330 09:24:01.761485   38578 api_server.go:140] control plane version: v1.27.0-rc.0
	I0330 09:24:01.761495   38578 api_server.go:130] duration metric: took 3.512829774s to wait for apiserver health ...
	I0330 09:24:01.761504   38578 cni.go:84] Creating CNI manager for ""
	I0330 09:24:01.761511   38578 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:24:01.784583   38578 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0330 09:24:01.804774   38578 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0330 09:24:01.814705   38578 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
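
	Only a 457-byte conflist is copied for the bridge CNI above, and its exact contents are not reproduced in this log. The sketch below therefore writes an illustrative bridge + portmap chain of the same general shape; the field values are typical bridge plugin defaults, not a verbatim copy of minikube's file:

// cni_conflist_sketch.go - writes an illustrative CNI conflist to
// /etc/cni/net.d/1-k8s.conflist. The JSON below is a plausible bridge+portmap
// configuration, assumed for demonstration only.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}
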
	I0330 09:24:01.828347   38578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0330 09:24:01.834371   38578 system_pods.go:59] 5 kube-system pods found
	I0330 09:24:01.834386   38578 system_pods.go:61] "etcd-kubernetes-upgrade-185000" [80464f7c-1133-49a5-bae8-8914a8404e8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0330 09:24:01.834392   38578 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-185000" [be9095f4-3c40-4d93-906b-f661614a50fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0330 09:24:01.834401   38578 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-185000" [d734920b-c12d-4fde-84a4-822eca8fe29b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0330 09:24:01.834407   38578 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-185000" [28d1e6ff-4b71-4af2-be82-5c3a130c5d72] Pending
	I0330 09:24:01.834411   38578 system_pods.go:61] "storage-provisioner" [c612b261-a6e5-476c-ae6f-610dc5770bce] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0330 09:24:01.834417   38578 system_pods.go:74] duration metric: took 6.058777ms to wait for pod list to return data ...
	I0330 09:24:01.834422   38578 node_conditions.go:102] verifying NodePressure condition ...
	I0330 09:24:01.837518   38578 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0330 09:24:01.837532   38578 node_conditions.go:123] node cpu capacity is 6
	I0330 09:24:01.837543   38578 node_conditions.go:105] duration metric: took 3.117057ms to run NodePressure ...
	I0330 09:24:01.837555   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:24:01.968353   38578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0330 09:24:01.975996   38578 ops.go:34] apiserver oom_adj: -16
	I0330 09:24:01.976009   38578 kubeadm.go:637] restartCluster took 9.329049131s
	I0330 09:24:01.976017   38578 kubeadm.go:403] StartCluster complete in 9.3863006s
	I0330 09:24:01.976031   38578 settings.go:142] acquiring lock: {Name:mkee06510b0682aea765fc9cbf62cdda0355bccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:24:01.976116   38578 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:24:01.976752   38578 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:24:01.976990   38578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0330 09:24:01.977015   38578 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0330 09:24:01.977073   38578 addons.go:66] Setting default-storageclass=true in profile "kubernetes-upgrade-185000"
	I0330 09:24:01.977071   38578 addons.go:66] Setting storage-provisioner=true in profile "kubernetes-upgrade-185000"
	I0330 09:24:01.977089   38578 addons.go:228] Setting addon storage-provisioner=true in "kubernetes-upgrade-185000"
	W0330 09:24:01.977095   38578 addons.go:237] addon storage-provisioner should already be in state true
	I0330 09:24:01.977097   38578 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-185000"
	I0330 09:24:01.977129   38578 host.go:66] Checking if "kubernetes-upgrade-185000" exists ...
	I0330 09:24:01.977175   38578 config.go:182] Loaded profile config "kubernetes-upgrade-185000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0330 09:24:01.977368   38578 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Status}}
	I0330 09:24:01.977483   38578 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Status}}
	I0330 09:24:01.977469   38578 kapi.go:59] client config for kubernetes-upgrade-185000: &rest.Config{Host:"https://127.0.0.1:57640", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.key", CAFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24f0420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0330 09:24:01.983798   38578 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-185000" context rescaled to 1 replicas
	I0330 09:24:01.983828   38578 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:24:02.006390   38578 out.go:177] * Verifying Kubernetes components...
	I0330 09:24:02.047198   38578 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:24:02.060379   38578 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0330 09:24:02.086360   38578 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0330 09:24:02.065031   38578 kapi.go:59] client config for kubernetes-upgrade-185000: &rest.Config{Host:"https://127.0.0.1:57640", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubernetes-upgrade-185000/client.key", CAFile:"/Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24f0420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0330 09:24:02.067804   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:24:02.096166   38578 addons.go:228] Setting addon default-storageclass=true in "kubernetes-upgrade-185000"
	W0330 09:24:02.107402   38578 addons.go:237] addon default-storageclass should already be in state true
	I0330 09:24:02.107447   38578 host.go:66] Checking if "kubernetes-upgrade-185000" exists ...
	I0330 09:24:02.107481   38578 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:24:02.107492   38578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0330 09:24:02.107564   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:24:02.108418   38578 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-185000 --format={{.State.Status}}
	I0330 09:24:02.156486   38578 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:24:02.156570   38578 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:24:02.170638   38578 api_server.go:71] duration metric: took 186.784625ms to wait for apiserver process to appear ...
	I0330 09:24:02.170654   38578 api_server.go:87] waiting for apiserver healthz status ...
	I0330 09:24:02.170663   38578 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:57640/healthz ...
	I0330 09:24:02.177466   38578 api_server.go:278] https://127.0.0.1:57640/healthz returned 200:
	ok
	I0330 09:24:02.179517   38578 api_server.go:140] control plane version: v1.27.0-rc.0
	I0330 09:24:02.179533   38578 api_server.go:130] duration metric: took 8.871663ms to wait for apiserver health ...
	I0330 09:24:02.179540   38578 system_pods.go:43] waiting for kube-system pods to appear ...
	I0330 09:24:02.181064   38578 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0330 09:24:02.181078   38578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0330 09:24:02.181161   38578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-185000
	I0330 09:24:02.181982   38578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57636 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:24:02.184886   38578 system_pods.go:59] 5 kube-system pods found
	I0330 09:24:02.184913   38578 system_pods.go:61] "etcd-kubernetes-upgrade-185000" [80464f7c-1133-49a5-bae8-8914a8404e8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0330 09:24:02.184956   38578 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-185000" [be9095f4-3c40-4d93-906b-f661614a50fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0330 09:24:02.184977   38578 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-185000" [d734920b-c12d-4fde-84a4-822eca8fe29b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0330 09:24:02.184985   38578 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-185000" [28d1e6ff-4b71-4af2-be82-5c3a130c5d72] Pending
	I0330 09:24:02.184991   38578 system_pods.go:61] "storage-provisioner" [c612b261-a6e5-476c-ae6f-610dc5770bce] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0330 09:24:02.184996   38578 system_pods.go:74] duration metric: took 5.450921ms to wait for pod list to return data ...
	I0330 09:24:02.185002   38578 kubeadm.go:578] duration metric: took 201.153892ms to wait for : map[apiserver:true system_pods:true] ...
	I0330 09:24:02.185014   38578 node_conditions.go:102] verifying NodePressure condition ...
	I0330 09:24:02.189006   38578 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0330 09:24:02.189039   38578 node_conditions.go:123] node cpu capacity is 6
	I0330 09:24:02.189051   38578 node_conditions.go:105] duration metric: took 4.032571ms to run NodePressure ...
	I0330 09:24:02.189058   38578 start.go:228] waiting for startup goroutines ...
	I0330 09:24:02.247212   38578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57636 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/kubernetes-upgrade-185000/id_rsa Username:docker}
	I0330 09:24:02.277495   38578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:24:02.358791   38578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0330 09:24:02.971322   38578 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0330 09:24:03.013242   38578 addons.go:499] enable addons completed in 1.036196093s: enabled=[storage-provisioner default-storageclass]
	I0330 09:24:03.013277   38578 start.go:233] waiting for cluster config update ...
	I0330 09:24:03.013294   38578 start.go:242] writing updated cluster config ...
	I0330 09:24:03.013727   38578 ssh_runner.go:195] Run: rm -f paused
	I0330 09:24:03.060609   38578 start.go:557] kubectl: 1.25.4, cluster: 1.27.0-rc.0 (minor skew: 2)
	I0330 09:24:03.082111   38578 out.go:177] 
	W0330 09:24:03.103346   38578 out.go:239] ! /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.27.0-rc.0.
	I0330 09:24:03.124283   38578 out.go:177]   - Want kubectl v1.27.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0330 09:24:03.146170   38578 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-185000" cluster and "default" namespace by default
	I0330 09:24:01.091463   38435 pod_ready.go:102] pod "coredns-787d4945fb-5nkxs" in "kube-system" namespace has status "Ready":"False"
	I0330 09:24:03.140756   38435 pod_ready.go:102] pod "coredns-787d4945fb-5nkxs" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-03-30 16:22:04 UTC, end at Thu 2023-03-30 16:24:04 UTC. --
	Mar 30 16:23:51 kubernetes-upgrade-185000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Start docker client with request timeout 0s"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Loaded network plugin cni"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Docker cri networking managed by network plugin cni"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Docker Info: &{ID:71101480-2689-4dc6-8e13-4a291c1593bb Containers:16 ContainersRunning:6 ContainersPaused:0 ContainersStopped:10 Images:15 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:58 SystemTime:2023-03-30T16:23:51.641105322Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.15.49-linuxkit OperatingSystem:Ubuntu 20.04.5 LTS OSVersion:20.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000266620 NCPU:6 MemTotal:6231715840 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:kubernetes-upgrade-185000 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:23.0.1 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Setting cgroupDriver cgroupfs"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Mar 30 16:23:51 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:51Z" level=info msg="Start cri-dockerd grpc backend"
	Mar 30 16:23:51 kubernetes-upgrade-185000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Mar 30 16:23:54 kubernetes-upgrade-185000 dockerd[13103]: time="2023-03-30T16:23:54.976545639Z" level=info msg="ignoring event" container=adf9b140b0efe636eee84250f68e5635f4ed504a5278eebf01b6a3ecd2634d08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 30 16:23:55 kubernetes-upgrade-185000 dockerd[13103]: time="2023-03-30T16:23:55.036807228Z" level=info msg="ignoring event" container=f5f6103c54f8683d42bd367880028ad234040533e7d7f9936a6c28a7edb92579 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 30 16:23:55 kubernetes-upgrade-185000 dockerd[13103]: time="2023-03-30T16:23:55.042100766Z" level=info msg="ignoring event" container=88ee712aafb8b3e37c330ab443c39f1392039c4f2b5a592c05981bdcba41f4af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 30 16:23:55 kubernetes-upgrade-185000 dockerd[13103]: time="2023-03-30T16:23:55.044565939Z" level=info msg="ignoring event" container=2c299d1fedb1ee75dca04944fa22712f3290aee8b633e1caee6d7cc2088cf260 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 30 16:23:55 kubernetes-upgrade-185000 dockerd[13103]: time="2023-03-30T16:23:55.051012860Z" level=info msg="ignoring event" container=3738b13d703d672038916a0e5f268462f5fb37bfbd266d1bf9b9340ec1595481 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 30 16:23:55 kubernetes-upgrade-185000 dockerd[13103]: time="2023-03-30T16:23:55.051220753Z" level=info msg="ignoring event" container=958a73a1ac4a00b1d611dd7cbaefda2b0997bea7cc89b7e001b2b83cff9397d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 30 16:23:55 kubernetes-upgrade-185000 dockerd[13103]: time="2023-03-30T16:23:55.957988675Z" level=info msg="ignoring event" container=9934f7a19853948a591bf34103b8073d716cd613d41d4be9b4c56f2027f353d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 30 16:23:56 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac841ac27337c8601e9743dcd95f67f1e2c2c75024ecfada0f1698b98cb610f9/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:23:56 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a1c3bb2d67b9a83d8e4221d7c55dda290ff7f931365613b173fa131b3a1ef39/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:23:56 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97a7232ceaee4a76fa8395eaa7dde0861baac666205ca9f3bb7980a45c2f6931/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:23:56 kubernetes-upgrade-185000 cri-dockerd[13662]: W0330 16:23:56.078715   13662 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Mar 30 16:23:56 kubernetes-upgrade-185000 cri-dockerd[13662]: time="2023-03-30T16:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/236704b1e783ff442ffe879dfd2d9db306465785f52d2d4c4a0cd55e5604adbe/resolv.conf as [nameserver 192.168.65.2 options ndots:0]"
	Mar 30 16:23:56 kubernetes-upgrade-185000 cri-dockerd[13662]: W0330 16:23:56.272816   13662 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	56929905ad179       d468edbd6d11a       7 seconds ago       Running             kube-scheduler            2                   1a1c3bb2d67b9
	5e3ed13b4bf56       9f9d741d7f1c5       7 seconds ago       Running             kube-controller-manager   2                   97a7232ceaee4
	e463a469c8286       2e5f542d09de7       7 seconds ago       Running             kube-apiserver            2                   236704b1e783f
	bf837999921ea       86b6af7dd652c       7 seconds ago       Running             etcd                      2                   ac841ac27337c
	2c299d1fedb1e       d468edbd6d11a       11 seconds ago      Exited              kube-scheduler            1                   3738b13d703d6
	958a73a1ac4a0       86b6af7dd652c       14 seconds ago      Exited              etcd                      1                   f5f6103c54f86
	9934f7a198539       2e5f542d09de7       14 seconds ago      Exited              kube-apiserver            1                   adf9b140b0efe
	6b4ea2ac78bde       9f9d741d7f1c5       36 seconds ago      Exited              kube-controller-manager   1                   516a4f51e1999
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-185000
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-185000
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 30 Mar 2023 16:22:24 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-185000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 30 Mar 2023 16:24:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 30 Mar 2023 16:24:00 +0000   Thu, 30 Mar 2023 16:22:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 30 Mar 2023 16:24:00 +0000   Thu, 30 Mar 2023 16:22:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 30 Mar 2023 16:24:00 +0000   Thu, 30 Mar 2023 16:22:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 30 Mar 2023 16:24:00 +0000   Thu, 30 Mar 2023 16:23:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-185000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 b249c14bbd9147e887f6315aff00ef06
	  System UUID:                b249c14bbd9147e887f6315aff00ef06
	  Boot ID:                    b745a502-078f-4e66-a21d-1fdb66506a40
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.27.0-rc.0
	  Kube-Proxy Version:         v1.27.0-rc.0
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-185000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         25s
	  kube-system                 kube-apiserver-kubernetes-upgrade-185000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-185000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-kubernetes-upgrade-185000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  NodeAllocatableEnforced  104s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  103s (x8 over 104s)  kubelet  Node kubernetes-upgrade-185000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 104s)  kubelet  Node kubernetes-upgrade-185000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 104s)  kubelet  Node kubernetes-upgrade-185000 status is now: NodeHasSufficientPID
	  Normal  Starting                 7s                   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)      kubelet  Node kubernetes-upgrade-185000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)      kubelet  Node kubernetes-upgrade-185000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)      kubelet  Node kubernetes-upgrade-185000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                   kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000069] FS-Cache: O-key=[8] '92ec050500000000'
	[  +0.000058] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000063] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=00000000b5c08eef
	[  +0.000075] FS-Cache: N-key=[8] '92ec050500000000'
	[  +0.002671] FS-Cache: Duplicate cookie detected
	[  +0.000030] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000085] FS-Cache: O-cookie d=00000000841d7711{9p.inode} n=00000000195e1b81
	[  +0.000062] FS-Cache: O-key=[8] '92ec050500000000'
	[  +0.000063] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000054] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=00000000658f0f61
	[  +0.000049] FS-Cache: N-key=[8] '92ec050500000000'
	[  +3.552031] FS-Cache: Duplicate cookie detected
	[  +0.000056] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000038] FS-Cache: O-cookie d=00000000841d7711{9p.inode} n=0000000041fed827
	[  +0.000076] FS-Cache: O-key=[8] '91ec050500000000'
	[  +0.000046] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000054] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=00000000c6ec446f
	[  +0.000072] FS-Cache: N-key=[8] '91ec050500000000'
	[  +0.507564] FS-Cache: Duplicate cookie detected
	[  +0.000076] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000040] FS-Cache: O-cookie d=00000000841d7711{9p.inode} n=00000000af66c14c
	[  +0.000069] FS-Cache: O-key=[8] '98ec050500000000'
	[  +0.000040] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000056] FS-Cache: N-cookie d=00000000841d7711{9p.inode} n=000000009516b7ed
	[  +0.000073] FS-Cache: N-key=[8] '98ec050500000000'
	
	* 
	* ==> etcd [958a73a1ac4a] <==
	* {"level":"info","ts":"2023-03-30T16:23:51.275Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.3"}
	{"level":"info","ts":"2023-03-30T16:23:51.276Z","caller":"membership/cluster.go:576","msg":"updated cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","from":"3.3","to":"3.5"}
	{"level":"info","ts":"2023-03-30T16:23:51.275Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-30T16:23:51.276Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-30T16:23:51.276Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-30T16:23:52.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-03-30T16:23:52.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-03-30T16:23:52.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-03-30T16:23:52.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-03-30T16:23:52.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-30T16:23:52.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-03-30T16:23:52.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-30T16:23:52.369Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-185000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-30T16:23:52.369Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-30T16:23:52.369Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-30T16:23:52.371Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-30T16:23:52.372Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-30T16:23:52.372Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-30T16:23:52.373Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-03-30T16:23:54.948Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-03-30T16:23:54.948Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-185000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-03-30T16:23:54.960Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-03-30T16:23:54.963Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-30T16:23:54.965Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-30T16:23:54.965Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-185000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [bf837999921e] <==
	* {"level":"info","ts":"2023-03-30T16:23:57.947Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-03-30T16:23:57.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-03-30T16:23:57.948Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-03-30T16:23:57.948Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.3"}
	{"level":"info","ts":"2023-03-30T16:23:57.948Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.3"}
	{"level":"info","ts":"2023-03-30T16:23:57.951Z","caller":"membership/cluster.go:576","msg":"updated cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","from":"3.3","to":"3.5"}
	{"level":"info","ts":"2023-03-30T16:23:57.951Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-30T16:23:57.952Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-30T16:23:57.952Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-30T16:23:57.952Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-03-30T16:23:57.952Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-30T16:23:59.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
	{"level":"info","ts":"2023-03-30T16:23:59.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-03-30T16:23:59.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-03-30T16:23:59.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
	{"level":"info","ts":"2023-03-30T16:23:59.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-30T16:23:59.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
	{"level":"info","ts":"2023-03-30T16:23:59.338Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-03-30T16:23:59.339Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-185000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-30T16:23:59.339Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-30T16:23:59.339Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-30T16:23:59.340Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-30T16:23:59.340Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-30T16:23:59.340Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-03-30T16:23:59.340Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  16:24:05 up  2:23,  0 users,  load average: 3.25, 2.19, 1.63
	Linux kubernetes-upgrade-185000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [9934f7a19853] <==
	* I0330 16:23:54.956135       1 controller.go:228] Shutting down kubernetes service endpoint reconciler
	W0330 16:23:54.956897       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0330 16:23:54.957022       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0330 16:23:54.957926       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [e463a469c828] <==
	* I0330 16:24:00.238500       1 apf_controller.go:361] Starting API Priority and Fairness config controller
	I0330 16:24:00.238914       1 controller.go:121] Starting legacy_token_tracking_controller
	I0330 16:24:00.238947       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0330 16:24:00.241704       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0330 16:24:00.241754       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0330 16:24:00.243490       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0330 16:24:00.243499       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0330 16:24:00.259681       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0330 16:24:00.269773       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0330 16:24:00.338223       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0330 16:24:00.338524       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0330 16:24:00.338596       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0330 16:24:00.338527       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0330 16:24:00.338535       1 cache.go:39] Caches are synced for autoregister controller
	I0330 16:24:00.338331       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0330 16:24:00.338925       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0330 16:24:00.338967       1 shared_informer.go:318] Caches are synced for configmaps
	I0330 16:24:00.344162       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0330 16:24:01.049956       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0330 16:24:01.242042       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0330 16:24:01.908268       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0330 16:24:01.914727       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0330 16:24:01.937383       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0330 16:24:01.953957       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0330 16:24:01.960471       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [5e3ed13b4bf5] <==
	* I0330 16:23:58.772818       1 serving.go:348] Generated self-signed cert in-memory
	I0330 16:23:59.064386       1 controllermanager.go:187] "Starting" version="v1.27.0-rc.0"
	I0330 16:23:59.064439       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0330 16:23:59.065438       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0330 16:23:59.065555       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0330 16:23:59.065956       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0330 16:23:59.065989       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0330 16:24:02.344165       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0330 16:24:02.348161       1 controllermanager.go:638] "Started controller" controller="job"
	I0330 16:24:02.348386       1 job_controller.go:202] Starting job controller
	I0330 16:24:02.348393       1 shared_informer.go:311] Waiting for caches to sync for job
	I0330 16:24:02.350671       1 controllermanager.go:638] "Started controller" controller="csrapproving"
	I0330 16:24:02.350868       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0330 16:24:02.350878       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0330 16:24:02.353697       1 controllermanager.go:638] "Started controller" controller="ttl-after-finished"
	I0330 16:24:02.354042       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0330 16:24:02.354053       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0330 16:24:02.364161       1 controllermanager.go:638] "Started controller" controller="ttl"
	I0330 16:24:02.364378       1 ttl_controller.go:124] "Starting TTL controller"
	I0330 16:24:02.364387       1 shared_informer.go:311] Waiting for caches to sync for TTL
	E0330 16:24:02.367264       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0330 16:24:02.367317       1 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
	I0330 16:24:02.370003       1 controllermanager.go:638] "Started controller" controller="csrcleaner"
	I0330 16:24:02.370102       1 cleaner.go:82] Starting CSR cleaner controller
	I0330 16:24:02.445093       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-controller-manager [6b4ea2ac78bd] <==
	* I0330 16:23:47.666966       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0330 16:23:47.673685       1 controllermanager.go:638] "Started controller" controller="ephemeral-volume"
	I0330 16:23:47.673836       1 controller.go:169] "Starting ephemeral volume controller"
	I0330 16:23:47.673886       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0330 16:23:47.681756       1 controllermanager.go:638] "Started controller" controller="endpointslicemirroring"
	I0330 16:23:47.681904       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0330 16:23:47.681912       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0330 16:23:47.693820       1 controllermanager.go:638] "Started controller" controller="horizontalpodautoscaling"
	I0330 16:23:47.693844       1 horizontal.go:200] "Starting HPA controller"
	I0330 16:23:47.694000       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0330 16:23:47.700106       1 controllermanager.go:638] "Started controller" controller="daemonset"
	I0330 16:23:47.700226       1 daemon_controller.go:289] "Starting daemon sets controller"
	I0330 16:23:47.700233       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0330 16:23:47.818521       1 controllermanager.go:638] "Started controller" controller="job"
	I0330 16:23:47.818631       1 job_controller.go:202] Starting job controller
	I0330 16:23:47.818638       1 shared_informer.go:311] Waiting for caches to sync for job
	I0330 16:23:47.969484       1 controllermanager.go:638] "Started controller" controller="tokencleaner"
	I0330 16:23:47.969547       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0330 16:23:47.969558       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0330 16:23:47.969562       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0330 16:23:48.017570       1 node_lifecycle_controller.go:431] "Controller will reconcile labels"
	I0330 16:23:48.017661       1 controllermanager.go:638] "Started controller" controller="nodelifecycle"
	I0330 16:23:48.017767       1 node_lifecycle_controller.go:465] "Sending events to api server"
	I0330 16:23:48.017801       1 node_lifecycle_controller.go:476] "Starting node controller"
	I0330 16:23:48.017837       1 shared_informer.go:311] Waiting for caches to sync for taint
	
	* 
	* ==> kube-scheduler [2c299d1fedb1] <==
	* I0330 16:23:53.867553       1 serving.go:348] Generated self-signed cert in-memory
	I0330 16:23:54.200060       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.0-rc.0"
	I0330 16:23:54.200109       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0330 16:23:54.204348       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0330 16:23:54.204385       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0330 16:23:54.204387       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0330 16:23:54.204397       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0330 16:23:54.234094       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0330 16:23:54.234434       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0330 16:23:54.235195       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0330 16:23:54.235274       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0330 16:23:54.306003       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0330 16:23:54.306006       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0330 16:23:54.335519       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0330 16:23:54.953214       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0330 16:23:54.953301       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0330 16:23:54.953418       1 scheduling_queue.go:1137] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0330 16:23:54.953450       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [56929905ad17] <==
	* I0330 16:23:58.664526       1 serving.go:348] Generated self-signed cert in-memory
	I0330 16:24:00.344400       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.0-rc.0"
	I0330 16:24:00.344439       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0330 16:24:00.347534       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0330 16:24:00.347594       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0330 16:24:00.347595       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0330 16:24:00.347604       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0330 16:24:00.347604       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0330 16:24:00.347612       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0330 16:24:00.348100       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0330 16:24:00.348206       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0330 16:24:00.447736       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0330 16:24:00.447855       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0330 16:24:00.447926       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-03-30 16:22:04 UTC, end at Thu 2023-03-30 16:24:06 UTC. --
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.508222   14441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea7c189eacdc19d59b56c97fad1af810-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-185000\" (UID: \"ea7c189eacdc19d59b56c97fad1af810\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.508324   14441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea7c189eacdc19d59b56c97fad1af810-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-185000\" (UID: \"ea7c189eacdc19d59b56c97fad1af810\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.508374   14441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b77707386db23f123f6f9a42d0670f1-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-185000\" (UID: \"5b77707386db23f123f6f9a42d0670f1\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.508399   14441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea7c189eacdc19d59b56c97fad1af810-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-185000\" (UID: \"ea7c189eacdc19d59b56c97fad1af810\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.508485   14441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea7c189eacdc19d59b56c97fad1af810-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-185000\" (UID: \"ea7c189eacdc19d59b56c97fad1af810\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.508548   14441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea7c189eacdc19d59b56c97fad1af810-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-185000\" (UID: \"ea7c189eacdc19d59b56c97fad1af810\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.508573   14441 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea7c189eacdc19d59b56c97fad1af810-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-185000\" (UID: \"ea7c189eacdc19d59b56c97fad1af810\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.553046   14441 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: E0330 16:23:57.553428   14441 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.680251   14441 scope.go:115] "RemoveContainer" containerID="958a73a1ac4a00b1d611dd7cbaefda2b0997bea7cc89b7e001b2b83cff9397d9"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.688552   14441 scope.go:115] "RemoveContainer" containerID="9934f7a19853948a591bf34103b8073d716cd613d41d4be9b4c56f2027f353d1"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.698059   14441 scope.go:115] "RemoveContainer" containerID="6b4ea2ac78bdef24b11c06e338ef50bbc1ad3da1728fe1d835df206e2a233d3c"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.704835   14441 scope.go:115] "RemoveContainer" containerID="2c299d1fedb1ee75dca04944fa22712f3290aee8b633e1caee6d7cc2088cf260"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: E0330 16:23:57.835849   14441 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-185000?timeout=10s\": dial tcp 192.168.67.2:8443: connect: connection refused" interval="800ms"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:57.965199   14441 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-185000"
	Mar 30 16:23:57 kubernetes-upgrade-185000 kubelet[14441]: E0330 16:23:57.965556   14441 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.67.2:8443: connect: connection refused" node="kubernetes-upgrade-185000"
	Mar 30 16:23:58 kubernetes-upgrade-185000 kubelet[14441]: W0330 16:23:58.042380   14441 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Mar 30 16:23:58 kubernetes-upgrade-185000 kubelet[14441]: E0330 16:23:58.042455   14441 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Mar 30 16:23:58 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:23:58.775812   14441 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-185000"
	Mar 30 16:24:00 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:24:00.358515   14441 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-185000"
	Mar 30 16:24:00 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:24:00.358588   14441 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-185000"
	Mar 30 16:24:00 kubernetes-upgrade-185000 kubelet[14441]: E0330 16:24:00.472881   14441 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-185000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-185000"
	Mar 30 16:24:00 kubernetes-upgrade-185000 kubelet[14441]: E0330 16:24:00.640164   14441 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-kubernetes-upgrade-185000\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-185000"
	Mar 30 16:24:01 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:24:01.190313   14441 apiserver.go:52] "Watching apiserver"
	Mar 30 16:24:01 kubernetes-upgrade-185000 kubelet[14441]: I0330 16:24:01.207127   14441 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	

                                                
                                                
-- /stdout --
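The post-mortem output above shows storage-provisioner stuck in Pending because the node still carries the node.kubernetes.io/not-ready:NoSchedule taint. As a minimal local triage sketch (assuming the kubeconfig context name kubernetes-upgrade-185000 from this run), the same state can be checked with:

	# list pods that never reached Running -- the same query the test harness runs below
	kubectl --context kubernetes-upgrade-185000 get po -A --field-selector=status.phase!=Running
	# confirm the not-ready taint reported in the "describe nodes" section above
	kubectl --context kubernetes-upgrade-185000 describe node kubernetes-upgrade-185000 | grep -A1 Taints
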
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-185000 -n kubernetes-upgrade-185000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-185000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-scheduler-kubernetes-upgrade-185000 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-185000 describe pod kube-scheduler-kubernetes-upgrade-185000 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-185000 describe pod kube-scheduler-kubernetes-upgrade-185000 storage-provisioner: exit status 1 (54.532743ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-scheduler-kubernetes-upgrade-185000" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-185000 describe pod kube-scheduler-kubernetes-upgrade-185000 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-185000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-185000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-185000: (2.978193969s)
--- FAIL: TestKubernetesUpgrade (379.57s)
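
As a minimal sketch for reproducing the post-mortem and cleanup by hand, using only commands already shown in this log (the profile name kubernetes-upgrade-185000 is specific to this run):

	# check whether the profile's apiserver is still reported as healthy
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-185000
	# delete the profile once triage is finished, as the test cleanup does
	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-185000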

                                                
                                    
TestMissingContainerUpgrade (67.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2703033494.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2703033494.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker : exit status 78 (48.961512843s)

                                                
                                                
-- stdout --
	* [missing-upgrade-491000] minikube v1.9.1 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-491000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-491000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB  (download progress ticker elided)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:17:13.836548869 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-491000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:17:33.124813971 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
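The failure above is docker.service refusing to restart after the legacy minikube binary rewrites /lib/systemd/system/docker.service; its own error text points at "systemctl status docker.service" and "journalctl -xe" for details. A minimal sketch of pulling those diagnostics out of the kic node container (container name taken from the docker inspect output further down in this report; it assumes the container is still up, which a failed run does not guarantee):

	docker exec missing-upgrade-491000 systemctl status docker.service --no-pager
	docker exec missing-upgrade-491000 journalctl -xe --no-pager
	# show the rewritten unit that produced the diff quoted above
	docker exec missing-upgrade-491000 cat /lib/systemd/system/docker.service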
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2703033494.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2703033494.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker : exit status 70 (4.412407729s)

                                                
                                                
-- stdout --
	* [missing-upgrade-491000] minikube v1.9.1 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-491000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-491000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2703033494.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.2703033494.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker : exit status 70 (4.408499572s)

                                                
                                                
-- stdout --
	* [missing-upgrade-491000] minikube v1.9.1 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-491000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-491000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-03-30 09:17:47.3054 -0700 PDT m=+2424.569056285
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-491000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-491000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f37996a47f18a4eab4041ed257ab653545eae187eb78638c277d9639b6ab6c6f",
	        "Created": "2023-03-30T16:17:22.13746987Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 561142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:17:22.371897229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/f37996a47f18a4eab4041ed257ab653545eae187eb78638c277d9639b6ab6c6f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f37996a47f18a4eab4041ed257ab653545eae187eb78638c277d9639b6ab6c6f/hostname",
	        "HostsPath": "/var/lib/docker/containers/f37996a47f18a4eab4041ed257ab653545eae187eb78638c277d9639b6ab6c6f/hosts",
	        "LogPath": "/var/lib/docker/containers/f37996a47f18a4eab4041ed257ab653545eae187eb78638c277d9639b6ab6c6f/f37996a47f18a4eab4041ed257ab653545eae187eb78638c277d9639b6ab6c6f-json.log",
	        "Name": "/missing-upgrade-491000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-491000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d0bb9fa7ad60d8e0388183e5e6900890237bb01aeb6475d4b1f3c29c1c628f1b-init/diff:/var/lib/docker/overlay2/cf01fb17109cfe890b79100452271544674c79de1e99e7cb554dd9846dd2dc20/diff:/var/lib/docker/overlay2/2ee96a5e3cd957476bbbb0dedec995768fbfa53b883e890355fe05c4edf51dec/diff:/var/lib/docker/overlay2/5a883bc8f2bfc1bd8a0d79cdf0c589f76f46b3712d9ebadde53b16c358448176/diff:/var/lib/docker/overlay2/b66f255016d8fd6edade780389c232a6b53e24204ade62186925069b2ad55ac0/diff:/var/lib/docker/overlay2/20cec7edb46d540d3c7a50816cd660a7f5b68a539c97bc2f4c5de5d958a7052b/diff:/var/lib/docker/overlay2/eb605471f3c21ba6238e73b8020447e2ecb4554c808c3ba8e9b0e2d4387cb15e/diff:/var/lib/docker/overlay2/01b084f0312a32d2f204e50a20c943943e4df09ae1cf39e2ef13117e221bb8a9/diff:/var/lib/docker/overlay2/021330f16a7ab5a5c536939c8a71616c5da3103a1603c93db60b99224076ab60/diff:/var/lib/docker/overlay2/3f7e0648776bc5e47c8a5d6a5c3e88b721c09be9811331528a8fb97aa9fa51ae/diff:/var/lib/docker/overlay2/e96ef2
541033bc1a9853ec6a5b4b1a4a8f35419ec21c7afdcd994b2a3dd7180a/diff:/var/lib/docker/overlay2/24b61f762b2638958ff42473d8cad19edf2953806250fe230588819922ab61a2/diff:/var/lib/docker/overlay2/7e4b405a358035781bd33e603483d85a8f2be6037719b265ae858066a3e744b3/diff:/var/lib/docker/overlay2/b6d2880761fd066f62c11e70c25b464e0a080787454d2c1a974bba59f76c6bc3/diff:/var/lib/docker/overlay2/68368955348bb279c112e5671f1643ed1cd02b5533983ea062ec2f14deb0e6b4/diff:/var/lib/docker/overlay2/5f3cc28fef90b9acd47130a59b90339683764b990d4d5687433f80548ea47109/diff:/var/lib/docker/overlay2/47ed3869d356b737fc0bf6fb764c60ed1c1677a4dc7bf8c1a8b4d170cb46eb07/diff:/var/lib/docker/overlay2/d7c31f5bb479e33a2ea9ddce92847a1e0b63dfc625149d726be3ac5619355542/diff:/var/lib/docker/overlay2/72696531166a9a0894848960ac635a91e06f1ef9f8132eaf83a048525b06c980/diff:/var/lib/docker/overlay2/4cdc8deeb54bb5c440e9bb3f0914cc3e8ae9dfc04c1226f9d0363962c496919c/diff:/var/lib/docker/overlay2/13247dda6599146b42df1376d0c47d3094006c754e397639d54f68f4bec71990/diff:/var/lib/d
ocker/overlay2/aa5e2788242d209fb8846b24bd1585829693186a5a036f0d36de2b9cc10dcc04/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d0bb9fa7ad60d8e0388183e5e6900890237bb01aeb6475d4b1f3c29c1c628f1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d0bb9fa7ad60d8e0388183e5e6900890237bb01aeb6475d4b1f3c29c1c628f1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d0bb9fa7ad60d8e0388183e5e6900890237bb01aeb6475d4b1f3c29c1c628f1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-491000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-491000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-491000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-491000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-491000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "119b1e4978adf216d644861b94485baef759f480839aad1f2a9e3fa7fe7e99e9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57316"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57314"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57315"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/119b1e4978ad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "1eaa2fbb33799fbc5fa80d5f56e7476714f2b3db8467ef9778535fc67f5fae87",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "fb5d7a6f98b327fc28f84e153589aa4b81d3f891558b99e2f18277157f8722f9",
	                    "EndpointID": "1eaa2fbb33799fbc5fa80d5f56e7476714f2b3db8467ef9778535fc67f5fae87",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-491000 -n missing-upgrade-491000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-491000 -n missing-upgrade-491000: exit status 6 (397.19888ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:17:47.756312   36274 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-491000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-491000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-491000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-491000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-491000: (2.402833165s)
--- FAIL: TestMissingContainerUpgrade (67.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (55.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.227459556.exe start -p stopped-upgrade-773000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.227459556.exe start -p stopped-upgrade-773000 --memory=2200 --vm-driver=docker : exit status 70 (45.633635638s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-773000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig27771625
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:18:47.246704836 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-773000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:19:06.267312941 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-773000", then "minikube start -p stopped-upgrade-773000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB  (download progress ticker elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:19:06.267312941 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.227459556.exe start -p stopped-upgrade-773000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.227459556.exe start -p stopped-upgrade-773000 --memory=2200 --vm-driver=docker : exit status 70 (4.483762869s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-773000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig4057375841
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-773000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
E0330 09:19:15.332407   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.227459556.exe start -p stopped-upgrade-773000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.227459556.exe start -p stopped-upgrade-773000 --memory=2200 --vm-driver=docker : exit status 70 (3.430404192s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-773000] minikube v1.9.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3037103843
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-773000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (55.90s)
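For reference, the first failed attempt above printed its own recovery hint: delete the profile, then start it again with more verbose logging. Applied to this profile, the suggested retry amounts to roughly the following (run with whichever minikube binary is under test, here the downloaded v1.9.0 build; the harness itself did not run these commands):

	minikube delete -p stopped-upgrade-773000
	minikube start -p stopped-upgrade-773000 --alsologtostderr -v=1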

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (252.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-331000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0330 09:29:18.553513   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:23.674163   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:33.981775   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-331000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m12.000413301s)

                                                
                                                
-- stdout --
	* [old-k8s-version-331000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-331000 in cluster old-k8s-version-331000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0330 09:29:17.245896   42664 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:29:17.246182   42664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:29:17.246188   42664 out.go:309] Setting ErrFile to fd 2...
	I0330 09:29:17.246193   42664 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:29:17.246326   42664 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:29:17.248869   42664 out.go:303] Setting JSON to false
	I0330 09:29:17.272763   42664 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8925,"bootTime":1680184832,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:29:17.272875   42664 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:29:17.294473   42664 out.go:177] * [old-k8s-version-331000] minikube v1.29.0 on Darwin 13.3
	I0330 09:29:17.336590   42664 notify.go:220] Checking for updates...
	I0330 09:29:17.357514   42664 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:29:17.378546   42664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:29:17.399494   42664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:29:17.420561   42664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:29:17.441412   42664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:29:17.483482   42664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:29:17.504696   42664 config.go:182] Loaded profile config "kubenet-378000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:29:17.504753   42664 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:29:17.588450   42664 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:29:17.588561   42664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:29:17.799774   42664 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:56 SystemTime:2023-03-30 16:29:17.643084263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:29:17.821989   42664 out.go:177] * Using the docker driver based on user configuration
	I0330 09:29:17.864392   42664 start.go:295] selected driver: docker
	I0330 09:29:17.864403   42664 start.go:859] validating driver "docker" against <nil>
	I0330 09:29:17.864411   42664 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:29:17.867275   42664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:29:18.069612   42664 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-30 16:29:17.923508705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:29:18.069743   42664 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0330 09:29:18.069932   42664 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0330 09:29:18.091935   42664 out.go:177] * Using Docker Desktop driver with root privileges
	I0330 09:29:18.112790   42664 cni.go:84] Creating CNI manager for ""
	I0330 09:29:18.112829   42664 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 09:29:18.112840   42664 start_flags.go:319] config:
	{Name:old-k8s-version-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:29:18.134661   42664 out.go:177] * Starting control plane node old-k8s-version-331000 in cluster old-k8s-version-331000
	I0330 09:29:18.155618   42664 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:29:18.176406   42664 out.go:177] * Pulling base image ...
	I0330 09:29:18.197621   42664 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:29:18.197649   42664 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:29:18.197681   42664 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0330 09:29:18.197691   42664 cache.go:57] Caching tarball of preloaded images
	I0330 09:29:18.197830   42664 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:29:18.197842   42664 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0330 09:29:18.198412   42664 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/config.json ...
	I0330 09:29:18.198505   42664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/config.json: {Name:mk60ffde538431f22a91f0eac7a1f4d1745248d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:18.260626   42664 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:29:18.260647   42664 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:29:18.260674   42664 cache.go:193] Successfully downloaded all kic artifacts
	I0330 09:29:18.260717   42664 start.go:364] acquiring machines lock for old-k8s-version-331000: {Name:mk68a72133bfb0ba0e52354dae23a3d4710ac349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:29:18.260870   42664 start.go:368] acquired machines lock for "old-k8s-version-331000" in 141.417µs
	I0330 09:29:18.260900   42664 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:29:18.260991   42664 start.go:125] createHost starting for "" (driver="docker")
	I0330 09:29:18.282639   42664 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0330 09:29:18.282836   42664 start.go:159] libmachine.API.Create for "old-k8s-version-331000" (driver="docker")
	I0330 09:29:18.282864   42664 client.go:168] LocalClient.Create starting
	I0330 09:29:18.282945   42664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem
	I0330 09:29:18.282979   42664 main.go:141] libmachine: Decoding PEM data...
	I0330 09:29:18.282996   42664 main.go:141] libmachine: Parsing certificate...
	I0330 09:29:18.283063   42664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem
	I0330 09:29:18.283087   42664 main.go:141] libmachine: Decoding PEM data...
	I0330 09:29:18.283095   42664 main.go:141] libmachine: Parsing certificate...
	I0330 09:29:18.283605   42664 cli_runner.go:164] Run: docker network inspect old-k8s-version-331000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0330 09:29:18.348067   42664 cli_runner.go:211] docker network inspect old-k8s-version-331000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0330 09:29:18.348168   42664 network_create.go:281] running [docker network inspect old-k8s-version-331000] to gather additional debugging logs...
	I0330 09:29:18.348184   42664 cli_runner.go:164] Run: docker network inspect old-k8s-version-331000
	W0330 09:29:18.408334   42664 cli_runner.go:211] docker network inspect old-k8s-version-331000 returned with exit code 1
	I0330 09:29:18.408363   42664 network_create.go:284] error running [docker network inspect old-k8s-version-331000]: docker network inspect old-k8s-version-331000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-331000
	I0330 09:29:18.408377   42664 network_create.go:286] output of [docker network inspect old-k8s-version-331000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-331000
	
	** /stderr **
	I0330 09:29:18.408460   42664 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0330 09:29:18.471769   42664 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0330 09:29:18.473349   42664 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0330 09:29:18.474879   42664 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0330 09:29:18.475189   42664 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000da97c0}
	I0330 09:29:18.475203   42664 network_create.go:123] attempt to create docker network old-k8s-version-331000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0330 09:29:18.475275   42664 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-331000 old-k8s-version-331000
	I0330 09:29:18.575264   42664 network_create.go:107] docker network old-k8s-version-331000 192.168.76.0/24 created
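	The lines above show network_create.go skipping the reserved 192.168.49/58/67 subnets and then creating a dedicated bridge network for the profile. A minimal Go sketch of that scan-and-create flow (simplified candidate list and gateway helper assumed here, not minikube's actual implementation) might look like:

    // Sketch: pick the first candidate /24 that is not reserved, then create a
    // bridge network for it, mirroring the "docker network create" call logged above.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        reserved := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}

        var subnet, gateway string
        for _, c := range candidates {
            if !reserved[c] {
                subnet = c
                // candidates all end in ".0/24", so the gateway is the ".1" host address
                gateway = c[:len(c)-4] + "1" // "192.168.76.0/24" -> "192.168.76.1"
                break
            }
        }
        if subnet == "" {
            log.Fatal("no free private subnet found")
        }

        cmd := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet="+subnet,
            "--gateway="+gateway,
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "old-k8s-version-331000")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("network create failed: %v\n%s", err, out)
        }
        fmt.Println("created network on", subnet)
    }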
	I0330 09:29:18.575309   42664 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-331000" container
	I0330 09:29:18.575427   42664 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0330 09:29:18.637501   42664 cli_runner.go:164] Run: docker volume create old-k8s-version-331000 --label name.minikube.sigs.k8s.io=old-k8s-version-331000 --label created_by.minikube.sigs.k8s.io=true
	I0330 09:29:18.707749   42664 oci.go:103] Successfully created a docker volume old-k8s-version-331000
	I0330 09:29:18.707859   42664 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-331000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-331000 --entrypoint /usr/bin/test -v old-k8s-version-331000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -d /var/lib
	I0330 09:29:19.213875   42664 oci.go:107] Successfully prepared a docker volume old-k8s-version-331000
	I0330 09:29:19.213921   42664 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:29:19.213937   42664 kic.go:190] Starting extracting preloaded images to volume ...
	I0330 09:29:19.214064   42664 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-331000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir
	I0330 09:29:25.534782   42664 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-331000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 -I lz4 -xf /preloaded.tar -C /extractDir: (6.320515306s)
	I0330 09:29:25.534809   42664 kic.go:199] duration metric: took 6.320822 seconds to extract preloaded images to volume
	I0330 09:29:25.534915   42664 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0330 09:29:25.741417   42664 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-331000 --name old-k8s-version-331000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-331000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-331000 --network old-k8s-version-331000 --ip 192.168.76.2 --volume old-k8s-version-331000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978
	I0330 09:29:26.177842   42664 cli_runner.go:164] Run: docker container inspect old-k8s-version-331000 --format={{.State.Running}}
	I0330 09:29:26.255508   42664 cli_runner.go:164] Run: docker container inspect old-k8s-version-331000 --format={{.State.Status}}
	I0330 09:29:26.330188   42664 cli_runner.go:164] Run: docker exec old-k8s-version-331000 stat /var/lib/dpkg/alternatives/iptables
	I0330 09:29:26.452806   42664 oci.go:144] the created container "old-k8s-version-331000" has a running status.
	I0330 09:29:26.452838   42664 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa...
	I0330 09:29:26.717498   42664 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0330 09:29:26.827722   42664 cli_runner.go:164] Run: docker container inspect old-k8s-version-331000 --format={{.State.Status}}
	I0330 09:29:26.891585   42664 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0330 09:29:26.891605   42664 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-331000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0330 09:29:27.005891   42664 cli_runner.go:164] Run: docker container inspect old-k8s-version-331000 --format={{.State.Status}}
	I0330 09:29:27.067502   42664 machine.go:88] provisioning docker machine ...
	I0330 09:29:27.067550   42664 ubuntu.go:169] provisioning hostname "old-k8s-version-331000"
	I0330 09:29:27.067665   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:27.135603   42664 main.go:141] libmachine: Using SSH client type: native
	I0330 09:29:27.135991   42664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 58897 <nil> <nil>}
	I0330 09:29:27.136014   42664 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-331000 && echo "old-k8s-version-331000" | sudo tee /etc/hostname
	I0330 09:29:27.263548   42664 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-331000
	
	I0330 09:29:27.284284   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:27.348379   42664 main.go:141] libmachine: Using SSH client type: native
	I0330 09:29:27.348726   42664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 58897 <nil> <nil>}
	I0330 09:29:27.348745   42664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-331000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-331000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-331000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:29:27.467571   42664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
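	The hostname and /etc/hosts steps above are plain shell commands executed over SSH against the forwarded container port (127.0.0.1:58897 in this run). A self-contained Go sketch of that pattern using golang.org/x/crypto/ssh (the key path and command are illustrative; this is not libmachine's code) could be:

    // Sketch: run one provisioning command over SSH using the machine's private key.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Illustrative key location; the test run uses the profile dir under .minikube.
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-331000/id_rsa"))
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:58897", cfg) // port taken from the log above
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput(`sudo hostname old-k8s-version-331000 && echo "old-k8s-version-331000" | sudo tee /etc/hostname`)
        if err != nil {
            log.Fatalf("ssh command failed: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
    }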
	I0330 09:29:27.467593   42664 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:29:27.467620   42664 ubuntu.go:177] setting up certificates
	I0330 09:29:27.467631   42664 provision.go:83] configureAuth start
	I0330 09:29:27.467716   42664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-331000
	I0330 09:29:27.533005   42664 provision.go:138] copyHostCerts
	I0330 09:29:27.533109   42664 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:29:27.533117   42664 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:29:27.533218   42664 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:29:27.533419   42664 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:29:27.533425   42664 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:29:27.533490   42664 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:29:27.533649   42664 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:29:27.533655   42664 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:29:27.533716   42664 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:29:27.533837   42664 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-331000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-331000]
	I0330 09:29:27.729549   42664 provision.go:172] copyRemoteCerts
	I0330 09:29:27.729614   42664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:29:27.729680   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:27.795656   42664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58897 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:29:27.885221   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:29:27.903795   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0330 09:29:27.921549   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0330 09:29:27.939420   42664 provision.go:86] duration metric: configureAuth took 471.759078ms
	I0330 09:29:27.939435   42664 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:29:27.939587   42664 config.go:182] Loaded profile config "old-k8s-version-331000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0330 09:29:27.939659   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:28.008366   42664 main.go:141] libmachine: Using SSH client type: native
	I0330 09:29:28.008896   42664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 58897 <nil> <nil>}
	I0330 09:29:28.008912   42664 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:29:28.125181   42664 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:29:28.125199   42664 ubuntu.go:71] root file system type: overlay
	I0330 09:29:28.125320   42664 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:29:28.125426   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:28.188947   42664 main.go:141] libmachine: Using SSH client type: native
	I0330 09:29:28.189345   42664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 58897 <nil> <nil>}
	I0330 09:29:28.189403   42664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:29:28.316210   42664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:29:28.316311   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:28.377705   42664 main.go:141] libmachine: Using SSH client type: native
	I0330 09:29:28.378068   42664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 58897 <nil> <nil>}
	I0330 09:29:28.378081   42664 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:29:29.009951   42664 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-03-30 16:29:28.313246731 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0330 09:29:29.009971   42664 machine.go:91] provisioned docker machine in 1.94242981s
	I0330 09:29:29.009980   42664 client.go:171] LocalClient.Create took 10.727031116s
	I0330 09:29:29.009998   42664 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-331000" took 10.727084257s
	I0330 09:29:29.010009   42664 start.go:300] post-start starting for "old-k8s-version-331000" (driver="docker")
	I0330 09:29:29.010019   42664 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:29:29.010102   42664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:29:29.010163   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:29.084773   42664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58897 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:29:29.174701   42664 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:29:29.178552   42664 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:29:29.178569   42664 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:29:29.178577   42664 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:29:29.178582   42664 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:29:29.178592   42664 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:29:29.178683   42664 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:29:29.178857   42664 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:29:29.179043   42664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:29:29.187821   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:29:29.208587   42664 start.go:303] post-start completed in 198.567296ms
	I0330 09:29:29.209152   42664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-331000
	I0330 09:29:29.281261   42664 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/config.json ...
	I0330 09:29:29.281786   42664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:29:29.281871   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:29.346709   42664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58897 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:29:29.434477   42664 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:29:29.439803   42664 start.go:128] duration metric: createHost completed in 11.178717519s
	I0330 09:29:29.439826   42664 start.go:83] releasing machines lock for "old-k8s-version-331000", held for 11.178864098s
	I0330 09:29:29.439933   42664 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-331000
	I0330 09:29:29.511115   42664 ssh_runner.go:195] Run: cat /version.json
	I0330 09:29:29.511134   42664 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0330 09:29:29.511182   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:29.511232   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:29.587480   42664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58897 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:29:29.587560   42664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58897 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:29:29.673232   42664 ssh_runner.go:195] Run: systemctl --version
	I0330 09:29:29.953982   42664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0330 09:29:29.959544   42664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0330 09:29:29.980911   42664 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0330 09:29:29.980982   42664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0330 09:29:29.995708   42664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0330 09:29:30.004022   42664 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0330 09:29:30.004035   42664 start.go:481] detecting cgroup driver to use...
	I0330 09:29:30.004045   42664 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:29:30.004123   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:29:30.018106   42664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0330 09:29:30.026834   42664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:29:30.036205   42664 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:29:30.036263   42664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:29:30.045742   42664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:29:30.054898   42664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:29:30.063787   42664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:29:30.072921   42664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:29:30.081036   42664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:29:30.089998   42664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:29:30.097765   42664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:29:30.105245   42664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:29:30.183045   42664 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 09:29:30.258763   42664 start.go:481] detecting cgroup driver to use...
	I0330 09:29:30.258784   42664 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:29:30.258853   42664 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:29:30.275377   42664 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:29:30.275450   42664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:29:30.285703   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:29:30.303406   42664 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:29:30.307925   42664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:29:30.316915   42664 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (184 bytes)
	I0330 09:29:30.346978   42664 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:29:30.455524   42664 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:29:30.519704   42664 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:29:30.519728   42664 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:29:30.549528   42664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:29:30.640939   42664 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:29:30.872225   42664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:29:30.900191   42664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:29:30.972076   42664 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0330 09:29:30.972195   42664 cli_runner.go:164] Run: docker exec -t old-k8s-version-331000 dig +short host.docker.internal
	I0330 09:29:31.092699   42664 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:29:31.092822   42664 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:29:31.097414   42664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:29:31.107597   42664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:29:31.168457   42664 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:29:31.168539   42664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:29:31.189550   42664 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:29:31.189574   42664 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:29:31.189680   42664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:29:31.210974   42664 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:29:31.210993   42664 cache_images.go:84] Images are preloaded, skipping loading
	I0330 09:29:31.211108   42664 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:29:31.237611   42664 cni.go:84] Creating CNI manager for ""
	I0330 09:29:31.237629   42664 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 09:29:31.237647   42664 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 09:29:31.237666   42664 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-331000 NodeName:old-k8s-version-331000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:29:31.237791   42664 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-331000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-331000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
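The KubeletConfiguration above pins cgroupDriver to cgroupfs, which matches the driver minikube detected via `docker info --format {{.CgroupDriver}}` at the top of this excerpt; the later [WARNING IsDockerSystemdCheck] only notes that systemd is the recommended driver, not an actual mismatch. A minimal manual check on the node (not part of the automated run) that both sides agree would be:

  # both commands should report cgroupfs on this node
  docker info --format '{{.CgroupDriver}}'
  grep cgroupDriver /var/lib/kubelet/config.yaml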
	
	I0330 09:29:31.237881   42664 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-331000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
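The kubelet unit fragment above is written out as the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd a few lines below); the empty ExecStart= line is the standard systemd idiom for clearing the packaged ExecStart before substituting the minikube-specific command line. Assuming shell access to the node (for example via `minikube ssh -p old-k8s-version-331000`), a quick sketch of how to inspect the effective unit is:

  systemctl cat kubelet       # shows the base unit plus the 10-kubeadm.conf drop-in
  systemctl status kubelet    # one of the checks the kubeadm error text suggests below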
	I0330 09:29:31.237944   42664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0330 09:29:31.246309   42664 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:29:31.246373   42664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:29:31.254286   42664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0330 09:29:31.267777   42664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0330 09:29:31.280945   42664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0330 09:29:31.294259   42664 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:29:31.298282   42664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:29:31.308439   42664 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000 for IP: 192.168.76.2
	I0330 09:29:31.308457   42664 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:31.308620   42664 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:29:31.308671   42664 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:29:31.308711   42664 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/client.key
	I0330 09:29:31.308727   42664 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/client.crt with IP's: []
	I0330 09:29:31.428414   42664 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/client.crt ...
	I0330 09:29:31.428426   42664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/client.crt: {Name:mk65e4cbf4477d3526bca1450688ad0b6947726f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:31.428734   42664 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/client.key ...
	I0330 09:29:31.428745   42664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/client.key: {Name:mkbd3cf09b0e2df71db6a082f0c5a57bc353d6a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:31.428945   42664 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key.31bdca25
	I0330 09:29:31.428960   42664 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0330 09:29:31.868415   42664 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.crt.31bdca25 ...
	I0330 09:29:31.868437   42664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.crt.31bdca25: {Name:mk3733d0fa932234fe167beba04ea58bd7a5231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:31.868787   42664 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key.31bdca25 ...
	I0330 09:29:31.868799   42664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key.31bdca25: {Name:mkb723a7880d6710c7ee3ee38946a9ff822b5ea9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:31.869012   42664 certs.go:333] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.crt
	I0330 09:29:31.869185   42664 certs.go:337] copying /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key
	I0330 09:29:31.869356   42664 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.key
	I0330 09:29:31.869373   42664 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.crt with IP's: []
	I0330 09:29:31.998299   42664 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.crt ...
	I0330 09:29:31.998313   42664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.crt: {Name:mk88414fe7fc2d889797f3827c158d24911f0fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:31.998609   42664 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.key ...
	I0330 09:29:31.998622   42664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.key: {Name:mkb624882b987be5bcbaac72ff2957d3c90ba452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:29:31.999035   42664 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:29:31.999082   42664 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:29:31.999099   42664 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:29:31.999133   42664 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:29:31.999168   42664 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:29:31.999199   42664 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:29:31.999269   42664 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:29:31.999773   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:29:32.019348   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0330 09:29:32.037327   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:29:32.055165   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0330 09:29:32.073304   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:29:32.091521   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:29:32.109521   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:29:32.127152   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:29:32.145062   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:29:32.162990   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:29:32.181813   42664 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:29:32.199569   42664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:29:32.212998   42664 ssh_runner.go:195] Run: openssl version
	I0330 09:29:32.218877   42664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:29:32.227196   42664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:29:32.231166   42664 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:29:32.231212   42664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:29:32.236897   42664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:29:32.245377   42664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:29:32.253861   42664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:29:32.272354   42664 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:29:32.272413   42664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:29:32.278115   42664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
	I0330 09:29:32.286438   42664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:29:32.295036   42664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:29:32.299195   42664 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:29:32.299239   42664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:29:32.304754   42664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
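The 8-hex-digit link names used here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the value comes from `openssl x509 -hash -noout -in <cert>`, and the trailing .0 distinguishes certificates whose subjects hash to the same value. The symlink step the harness runs is roughly equivalent to this sketch, shown for the minikubeCA certificate only:

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"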
	I0330 09:29:32.313092   42664 kubeadm.go:401] StartCluster: {Name:old-k8s-version-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:29:32.313193   42664 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:29:32.332562   42664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:29:32.341052   42664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:29:32.349564   42664 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:29:32.349623   42664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:29:32.358050   42664 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:29:32.358080   42664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:29:32.413758   42664 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:29:32.413798   42664 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:29:32.587773   42664 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:29:32.587867   42664 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:29:32.587942   42664 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:29:32.747332   42664 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:29:32.748122   42664 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:29:32.754534   42664 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:29:32.828485   42664 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:29:32.850163   42664 out.go:204]   - Generating certificates and keys ...
	I0330 09:29:32.850256   42664 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:29:32.850333   42664 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:29:32.968995   42664 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0330 09:29:33.150535   42664 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0330 09:29:33.442734   42664 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0330 09:29:33.736156   42664 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0330 09:29:33.938772   42664 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0330 09:29:33.938881   42664 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-331000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0330 09:29:34.066403   42664 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0330 09:29:34.066529   42664 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-331000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0330 09:29:34.132383   42664 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0330 09:29:34.427492   42664 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0330 09:29:34.561151   42664 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0330 09:29:34.561206   42664 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:29:34.670189   42664 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:29:34.889689   42664 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:29:35.040075   42664 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:29:35.141125   42664 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:29:35.141836   42664 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:29:35.163269   42664 out.go:204]   - Booting up control plane ...
	I0330 09:29:35.163417   42664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:29:35.163588   42664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:29:35.163705   42664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:29:35.163853   42664 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:29:35.164109   42664 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:30:15.150328   42664 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:30:15.150967   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:30:15.151217   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:30:20.153763   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:30:20.153909   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:30:30.154359   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:30:30.154504   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:30:50.155144   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:30:50.155329   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:31:30.156271   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:31:30.156469   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:31:30.156482   42664 kubeadm.go:322] 
	I0330 09:31:30.156513   42664 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:31:30.156550   42664 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:31:30.156557   42664 kubeadm.go:322] 
	I0330 09:31:30.156596   42664 kubeadm.go:322] This error is likely caused by:
	I0330 09:31:30.156634   42664 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:31:30.156739   42664 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:31:30.156753   42664 kubeadm.go:322] 
	I0330 09:31:30.156831   42664 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:31:30.156854   42664 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:31:30.156878   42664 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:31:30.156882   42664 kubeadm.go:322] 
	I0330 09:31:30.156955   42664 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:31:30.157027   42664 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:31:30.157094   42664 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:31:30.157141   42664 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:31:30.157201   42664 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:31:30.157231   42664 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0330 09:31:30.160198   42664 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:31:30.160294   42664 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:31:30.160431   42664 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:31:30.160505   42664 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:31:30.160596   42664 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:31:30.160659   42664 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0330 09:31:30.160796   42664 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-331000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-331000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-331000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-331000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
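At this point minikube tears the node down with `kubeadm reset` (next line) and retries the same init. The troubleshooting steps the kubeadm output recommends can also be run by hand; assuming the kic container is still up, one hypothetical way in from the host is:

  docker exec -it old-k8s-version-331000 bash     # container name matches the profile name with the docker driver
  systemctl status kubelet
  journalctl -xeu kubelet
  docker ps -a | grep kube | grep -v pause        # lists any control-plane containers Docker started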
	
	I0330 09:31:30.160829   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0330 09:31:30.574701   42664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:31:30.585699   42664 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:31:30.585756   42664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:31:30.593914   42664 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:31:30.593937   42664 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:31:30.648249   42664 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:31:30.648312   42664 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:31:30.827821   42664 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:31:30.827897   42664 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:31:30.827964   42664 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:31:30.991884   42664 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:31:30.992518   42664 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:31:30.999817   42664 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:31:31.074920   42664 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:31:31.100837   42664 out.go:204]   - Generating certificates and keys ...
	I0330 09:31:31.100954   42664 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:31:31.101051   42664 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:31:31.101132   42664 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:31:31.101184   42664 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:31:31.101260   42664 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:31:31.101350   42664 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:31:31.101411   42664 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:31:31.101455   42664 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:31:31.101545   42664 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:31:31.101616   42664 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:31:31.101646   42664 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:31:31.101708   42664 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:31:31.176786   42664 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:31:31.263675   42664 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:31:31.385623   42664 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:31:31.599691   42664 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:31:31.600400   42664 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:31:31.621041   42664 out.go:204]   - Booting up control plane ...
	I0330 09:31:31.621157   42664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:31:31.621247   42664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:31:31.621328   42664 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:31:31.621418   42664 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:31:31.621598   42664 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:32:11.609707   42664 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:32:11.610700   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:32:11.610929   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:32:16.611587   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:32:16.611739   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:32:26.613189   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:32:26.613345   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:32:46.614002   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:32:46.614195   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:33:26.599903   42664 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:33:26.600129   42664 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:33:26.600149   42664 kubeadm.go:322] 
	I0330 09:33:26.600206   42664 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:33:26.600306   42664 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:33:26.600323   42664 kubeadm.go:322] 
	I0330 09:33:26.600368   42664 kubeadm.go:322] This error is likely caused by:
	I0330 09:33:26.600409   42664 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:33:26.600585   42664 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:33:26.600658   42664 kubeadm.go:322] 
	I0330 09:33:26.600792   42664 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:33:26.600833   42664 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:33:26.600868   42664 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:33:26.600876   42664 kubeadm.go:322] 
	I0330 09:33:26.600984   42664 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:33:26.601087   42664 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:33:26.601206   42664 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:33:26.601248   42664 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:33:26.601326   42664 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:33:26.601357   42664 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0330 09:33:26.604051   42664 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:33:26.604125   42664 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:33:26.604249   42664 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:33:26.604329   42664 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:33:26.604405   42664 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:33:26.604463   42664 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0330 09:33:26.604484   42664 kubeadm.go:403] StartCluster complete in 3m54.306239996s
	I0330 09:33:26.604590   42664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:33:26.625195   42664 logs.go:277] 0 containers: []
	W0330 09:33:26.625209   42664 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:33:26.625277   42664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:33:26.646293   42664 logs.go:277] 0 containers: []
	W0330 09:33:26.646306   42664 logs.go:279] No container was found matching "etcd"
	I0330 09:33:26.646378   42664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:33:26.665884   42664 logs.go:277] 0 containers: []
	W0330 09:33:26.665898   42664 logs.go:279] No container was found matching "coredns"
	I0330 09:33:26.665965   42664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:33:26.685549   42664 logs.go:277] 0 containers: []
	W0330 09:33:26.685563   42664 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:33:26.685636   42664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:33:26.706651   42664 logs.go:277] 0 containers: []
	W0330 09:33:26.706663   42664 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:33:26.706729   42664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:33:26.726595   42664 logs.go:277] 0 containers: []
	W0330 09:33:26.726608   42664 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:33:26.726680   42664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:33:26.747027   42664 logs.go:277] 0 containers: []
	W0330 09:33:26.747041   42664 logs.go:279] No container was found matching "kindnet"
	I0330 09:33:26.747048   42664 logs.go:123] Gathering logs for kubelet ...
	I0330 09:33:26.747056   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:33:26.787170   42664 logs.go:123] Gathering logs for dmesg ...
	I0330 09:33:26.787186   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:33:26.800626   42664 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:33:26.800640   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:33:26.856478   42664 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:33:26.856489   42664 logs.go:123] Gathering logs for Docker ...
	I0330 09:33:26.856497   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:33:26.880491   42664 logs.go:123] Gathering logs for container status ...
	I0330 09:33:26.880505   42664 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:33:28.925736   42664 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04550699s)
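The container-status probe above tries crictl first and falls back to `docker ps -a` when crictl is not on PATH, which is why the log shows the combined bash -c command. With the Docker runtime used here, an equivalent manual listing of Kubernetes-named containers would be something like:

  docker ps -a --filter name=k8s_ --format '{{.ID}}\t{{.Names}}\t{{.Status}}'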
	W0330 09:33:28.925849   42664 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0330 09:33:28.925863   42664 out.go:239] * 
	* 
	W0330 09:33:28.925970   42664 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:33:28.925984   42664 out.go:239] * 
	* 
	W0330 09:33:28.926572   42664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0330 09:33:29.005233   42664 out.go:177] 
	W0330 09:33:29.064426   42664 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:33:29.064526   42664 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0330 09:33:29.064586   42664 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0330 09:33:29.122403   42664 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-331000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
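The kubeadm warnings captured above point at a cgroup-driver mismatch (Docker reports "cgroupfs" while kubeadm recommends "systemd"), and the minikube output itself suggests a kubelet override. A minimal sketch of how one might confirm the driver and retry with that override, assuming the same profile name; the --extra-config flag is taken verbatim from the suggestion in the log, and the docker info template field is a standard key rather than something from this report:
	docker info --format '{{.CgroupDriver}}'    # prints the cgroup driver Docker is using (the warning above implies "cgroupfs")
	out/minikube-darwin-amd64 start -p old-k8s-version-331000 --extra-config=kubelet.cgroup-driver=systemd    # override suggested by the failure message above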
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-331000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-331000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a",
	        "Created": "2023-03-30T16:29:25.810047609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:29:26.166024196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a-json.log",
	        "Name": "/old-k8s-version-331000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-331000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-331000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-331000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-331000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-331000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c3400d6614f1fcfc40438330413bf45914c50ed0695474399ec3c8f922241a0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58898"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58899"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58901"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c3400d6614f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-331000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "521ae416df38",
	                        "old-k8s-version-331000"
	                    ],
	                    "NetworkID": "6fa3318f5503a289e050622f1a9f4b0bea1af814ad18d1b409b3b8c51f5769ef",
	                    "EndpointID": "aece95ad704ac274d6413327b529a3749e8ce01de608325aac7061bad1ca54c9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 6 (400.889411ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:33:29.678407   43874 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-331000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-331000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (252.50s)
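The failure text recommends 'systemctl status kubelet' and 'journalctl -xeu kubelet'; with the docker driver those would have to run inside the kicbase container shown in the docker inspect output above. A rough sketch under that assumption, using the container name reported there and assuming it is still running:
	docker exec old-k8s-version-331000 systemctl status kubelet --no-pager    # kubelet unit status inside the node container
	docker exec old-k8s-version-331000 journalctl -xeu kubelet --no-pager | tail -n 50    # last kubelet log lines, per the suggestion in the failure text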

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-331000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-331000 create -f testdata/busybox.yaml: exit status 1 (35.656295ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-331000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-331000 create -f testdata/busybox.yaml failed: exit status 1
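The 'context "old-k8s-version-331000" does not exist' error is consistent with the earlier status warning that kubectl is pointing at a stale context and that the profile name is missing from the kubeconfig. A minimal sketch of how one might verify and repair the context before retrying the create, assuming the fix suggested by that warning applies here:
	kubectl config get-contexts    # the profile name would be expected in this list
	out/minikube-darwin-amd64 -p old-k8s-version-331000 update-context    # `minikube update-context`, as suggested by the status warning above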
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-331000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-331000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a",
	        "Created": "2023-03-30T16:29:25.810047609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:29:26.166024196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a-json.log",
	        "Name": "/old-k8s-version-331000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-331000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-331000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-331000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-331000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-331000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c3400d6614f1fcfc40438330413bf45914c50ed0695474399ec3c8f922241a0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58898"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58899"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58901"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c3400d6614f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-331000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "521ae416df38",
	                        "old-k8s-version-331000"
	                    ],
	                    "NetworkID": "6fa3318f5503a289e050622f1a9f4b0bea1af814ad18d1b409b3b8c51f5769ef",
	                    "EndpointID": "aece95ad704ac274d6413327b529a3749e8ce01de608325aac7061bad1ca54c9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 6 (404.411331ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:33:30.181645   43887 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-331000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-331000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-331000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-331000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a",
	        "Created": "2023-03-30T16:29:25.810047609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:29:26.166024196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a-json.log",
	        "Name": "/old-k8s-version-331000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-331000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-331000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-331000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-331000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-331000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c3400d6614f1fcfc40438330413bf45914c50ed0695474399ec3c8f922241a0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58898"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58899"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58901"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c3400d6614f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-331000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "521ae416df38",
	                        "old-k8s-version-331000"
	                    ],
	                    "NetworkID": "6fa3318f5503a289e050622f1a9f4b0bea1af814ad18d1b409b3b8c51f5769ef",
	                    "EndpointID": "aece95ad704ac274d6413327b529a3749e8ce01de608325aac7061bad1ca54c9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 6 (397.439889ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:33:30.641060   43899 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-331000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-331000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-331000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0330 09:33:36.931176   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:33:37.807754   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:33:38.066454   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:38.072896   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:38.083454   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:38.105632   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:38.146895   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:38.227076   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:38.387326   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:38.678829   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:38.707581   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:39.348557   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:40.628751   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:43.189340   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:48.309405   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:58.549363   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:33:59.178317   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:34:13.417696   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:34:19.029384   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:34:29.741365   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:34:40.138700   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:34:41.170671   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:34:48.283427   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:48.289862   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:48.300674   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:48.321582   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:48.362655   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:48.442999   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:48.603450   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:48.924062   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:49.564243   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:50.844622   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:53.405746   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:58.526018   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:34:59.727003   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:34:59.989941   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:35:08.766158   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:35:18.787828   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-331000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m49.271391604s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-331000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-331000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-331000 describe deploy/metrics-server -n kube-system: exit status 1 (36.673603ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-331000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-331000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-331000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-331000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a",
	        "Created": "2023-03-30T16:29:25.810047609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:29:26.166024196Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a-json.log",
	        "Name": "/old-k8s-version-331000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-331000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-331000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-331000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-331000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-331000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c3400d6614f1fcfc40438330413bf45914c50ed0695474399ec3c8f922241a0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58898"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58899"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58901"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c3400d6614f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-331000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "521ae416df38",
	                        "old-k8s-version-331000"
	                    ],
	                    "NetworkID": "6fa3318f5503a289e050622f1a9f4b0bea1af814ad18d1b409b3b8c51f5769ef",
	                    "EndpointID": "aece95ad704ac274d6413327b529a3749e8ce01de608325aac7061bad1ca54c9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 6 (399.090607ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:35:20.411917   44033 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-331000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-331000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (497.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-331000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0330 09:35:29.246622   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:35:46.472169   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:35:53.081721   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:35:55.321612   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:36:02.060736   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:36:10.207021   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:36:12.270760   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:36:17.954473   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 09:36:20.770549   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:36:21.911007   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:36:45.895630   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-331000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m12.656257577s)

                                                
                                                
-- stdout --
	* [old-k8s-version-331000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-331000 in cluster old-k8s-version-331000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-331000" ...
	* Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0330 09:35:22.473006   44075 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:35:22.473187   44075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:35:22.473192   44075 out.go:309] Setting ErrFile to fd 2...
	I0330 09:35:22.473196   44075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:35:22.473313   44075 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:35:22.474830   44075 out.go:303] Setting JSON to false
	I0330 09:35:22.495608   44075 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9290,"bootTime":1680184832,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:35:22.495836   44075 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:35:22.517369   44075 out.go:177] * [old-k8s-version-331000] minikube v1.29.0 on Darwin 13.3
	I0330 09:35:22.559385   44075 notify.go:220] Checking for updates...
	I0330 09:35:22.582358   44075 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:35:22.624407   44075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:35:22.645591   44075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:35:22.666437   44075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:35:22.708404   44075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:35:22.753247   44075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:35:22.775085   44075 config.go:182] Loaded profile config "old-k8s-version-331000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0330 09:35:22.814418   44075 out.go:177] * Kubernetes 1.26.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.3
	I0330 09:35:22.835363   44075 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:35:22.900240   44075 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:35:22.900371   44075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:35:23.089037   44075 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:35:22.953794227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:35:23.132540   44075 out.go:177] * Using the docker driver based on existing profile
	I0330 09:35:23.153565   44075 start.go:295] selected driver: docker
	I0330 09:35:23.153590   44075 start.go:859] validating driver "docker" against &{Name:old-k8s-version-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:35:23.153708   44075 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:35:23.157816   44075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:35:23.346866   44075 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:35:23.210986063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:35:23.347016   44075 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0330 09:35:23.347035   44075 cni.go:84] Creating CNI manager for ""
	I0330 09:35:23.347049   44075 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 09:35:23.347058   44075 start_flags.go:319] config:
	{Name:old-k8s-version-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:35:23.369034   44075 out.go:177] * Starting control plane node old-k8s-version-331000 in cluster old-k8s-version-331000
	I0330 09:35:23.390715   44075 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:35:23.412611   44075 out.go:177] * Pulling base image ...
	I0330 09:35:23.455574   44075 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:35:23.455603   44075 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:35:23.455696   44075 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0330 09:35:23.455721   44075 cache.go:57] Caching tarball of preloaded images
	I0330 09:35:23.455979   44075 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:35:23.456005   44075 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0330 09:35:23.456998   44075 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/config.json ...
	I0330 09:35:23.517397   44075 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:35:23.517416   44075 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:35:23.517437   44075 cache.go:193] Successfully downloaded all kic artifacts
	I0330 09:35:23.517478   44075 start.go:364] acquiring machines lock for old-k8s-version-331000: {Name:mk68a72133bfb0ba0e52354dae23a3d4710ac349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:35:23.517573   44075 start.go:368] acquired machines lock for "old-k8s-version-331000" in 74.53µs
	I0330 09:35:23.517598   44075 start.go:96] Skipping create...Using existing machine configuration
	I0330 09:35:23.517608   44075 fix.go:55] fixHost starting: 
	I0330 09:35:23.517848   44075 cli_runner.go:164] Run: docker container inspect old-k8s-version-331000 --format={{.State.Status}}
	I0330 09:35:23.578506   44075 fix.go:103] recreateIfNeeded on old-k8s-version-331000: state=Stopped err=<nil>
	W0330 09:35:23.578534   44075 fix.go:129] unexpected machine state, will restart: <nil>
	I0330 09:35:23.600497   44075 out.go:177] * Restarting existing docker container for "old-k8s-version-331000" ...
	I0330 09:35:23.622070   44075 cli_runner.go:164] Run: docker start old-k8s-version-331000
	I0330 09:35:23.991944   44075 cli_runner.go:164] Run: docker container inspect old-k8s-version-331000 --format={{.State.Status}}
	I0330 09:35:24.056828   44075 kic.go:426] container "old-k8s-version-331000" state is running.
	I0330 09:35:24.057393   44075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-331000
	I0330 09:35:24.137811   44075 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/config.json ...
	I0330 09:35:24.138248   44075 machine.go:88] provisioning docker machine ...
	I0330 09:35:24.138271   44075 ubuntu.go:169] provisioning hostname "old-k8s-version-331000"
	I0330 09:35:24.138347   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:24.215323   44075 main.go:141] libmachine: Using SSH client type: native
	I0330 09:35:24.215719   44075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59135 <nil> <nil>}
	I0330 09:35:24.215732   44075 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-331000 && echo "old-k8s-version-331000" | sudo tee /etc/hostname
	I0330 09:35:24.346313   44075 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-331000
	
	I0330 09:35:24.346460   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:24.415074   44075 main.go:141] libmachine: Using SSH client type: native
	I0330 09:35:24.415645   44075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59135 <nil> <nil>}
	I0330 09:35:24.415679   44075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-331000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-331000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-331000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:35:24.535258   44075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:35:24.535280   44075 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:35:24.535306   44075 ubuntu.go:177] setting up certificates
	I0330 09:35:24.535313   44075 provision.go:83] configureAuth start
	I0330 09:35:24.535388   44075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-331000
	I0330 09:35:24.600840   44075 provision.go:138] copyHostCerts
	I0330 09:35:24.600946   44075 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:35:24.600957   44075 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:35:24.601073   44075 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:35:24.601285   44075 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:35:24.601292   44075 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:35:24.601368   44075 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:35:24.601522   44075 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:35:24.601528   44075 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:35:24.601586   44075 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:35:24.601709   44075 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-331000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-331000]
	I0330 09:35:24.711403   44075 provision.go:172] copyRemoteCerts
	I0330 09:35:24.711467   44075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:35:24.711531   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:24.774811   44075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59135 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:35:24.862064   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:35:24.880378   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0330 09:35:24.898297   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0330 09:35:24.916179   44075 provision.go:86] duration metric: configureAuth took 380.851356ms
	I0330 09:35:24.916194   44075 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:35:24.916379   44075 config.go:182] Loaded profile config "old-k8s-version-331000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0330 09:35:24.916443   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:24.977290   44075 main.go:141] libmachine: Using SSH client type: native
	I0330 09:35:24.977650   44075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59135 <nil> <nil>}
	I0330 09:35:24.977662   44075 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:35:25.095281   44075 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:35:25.095295   44075 ubuntu.go:71] root file system type: overlay
	I0330 09:35:25.095402   44075 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:35:25.095487   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:25.156448   44075 main.go:141] libmachine: Using SSH client type: native
	I0330 09:35:25.156796   44075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59135 <nil> <nil>}
	I0330 09:35:25.156856   44075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:35:25.284576   44075 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:35:25.284671   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:25.346104   44075 main.go:141] libmachine: Using SSH client type: native
	I0330 09:35:25.346449   44075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59135 <nil> <nil>}
	I0330 09:35:25.346470   44075 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:35:25.469034   44075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:35:25.469051   44075 machine.go:91] provisioned docker machine in 1.330793395s
	I0330 09:35:25.469062   44075 start.go:300] post-start starting for "old-k8s-version-331000" (driver="docker")
	I0330 09:35:25.469068   44075 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:35:25.469156   44075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:35:25.469219   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:25.530039   44075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59135 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:35:25.614747   44075 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:35:25.618476   44075 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:35:25.618492   44075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:35:25.618499   44075 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:35:25.618503   44075 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:35:25.618511   44075 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:35:25.618601   44075 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:35:25.618765   44075 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:35:25.618953   44075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:35:25.626549   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:35:25.644377   44075 start.go:303] post-start completed in 175.297827ms
	I0330 09:35:25.644458   44075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:35:25.644515   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:25.705978   44075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59135 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:35:25.791859   44075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:35:25.797129   44075 fix.go:57] fixHost completed within 2.279515069s
	I0330 09:35:25.797148   44075 start.go:83] releasing machines lock for "old-k8s-version-331000", held for 2.279565053s
	I0330 09:35:25.797245   44075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-331000
	I0330 09:35:25.857896   44075 ssh_runner.go:195] Run: cat /version.json
	I0330 09:35:25.857947   44075 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0330 09:35:25.857969   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:25.858020   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:25.922584   44075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59135 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:35:25.922767   44075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59135 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/old-k8s-version-331000/id_rsa Username:docker}
	I0330 09:35:26.276417   44075 ssh_runner.go:195] Run: systemctl --version
	I0330 09:35:26.281152   44075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0330 09:35:26.285748   44075 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0330 09:35:26.285800   44075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0330 09:35:26.293422   44075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0330 09:35:26.301076   44075 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0330 09:35:26.301089   44075 start.go:481] detecting cgroup driver to use...
	I0330 09:35:26.301099   44075 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:35:26.301174   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:35:26.315061   44075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0330 09:35:26.323997   44075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:35:26.333800   44075 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:35:26.333873   44075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:35:26.343900   44075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:35:26.352770   44075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:35:26.361504   44075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:35:26.370134   44075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:35:26.378087   44075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:35:26.386780   44075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:35:26.394300   44075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:35:26.401601   44075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:35:26.479663   44075 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 09:35:26.561554   44075 start.go:481] detecting cgroup driver to use...
	I0330 09:35:26.561578   44075 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:35:26.561641   44075 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:35:26.572193   44075 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:35:26.572257   44075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:35:26.582882   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:35:26.597478   44075 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:35:26.601948   44075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:35:26.611858   44075 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (184 bytes)
	I0330 09:35:26.631877   44075 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:35:26.732412   44075 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:35:26.794156   44075 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:35:26.794175   44075 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:35:26.829977   44075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:35:26.900227   44075 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:35:27.123184   44075 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:35:27.149226   44075 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:35:27.219421   44075 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 23.0.1 ...
	I0330 09:35:27.219602   44075 cli_runner.go:164] Run: docker exec -t old-k8s-version-331000 dig +short host.docker.internal
	I0330 09:35:27.344850   44075 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:35:27.344974   44075 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:35:27.349579   44075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:35:27.359728   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:27.421674   44075 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 09:35:27.421758   44075 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:35:27.442374   44075 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:35:27.442397   44075 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:35:27.442482   44075 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:35:27.462854   44075 docker.go:639] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0330 09:35:27.462874   44075 cache_images.go:84] Images are preloaded, skipping loading
	I0330 09:35:27.462952   44075 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:35:27.490220   44075 cni.go:84] Creating CNI manager for ""
	I0330 09:35:27.494786   44075 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 09:35:27.494803   44075 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 09:35:27.494829   44075 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-331000 NodeName:old-k8s-version-331000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:35:27.494939   44075 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-331000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-331000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0330 09:35:27.495009   44075 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-331000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0330 09:35:27.495073   44075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0330 09:35:27.503449   44075 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:35:27.503579   44075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:35:27.511938   44075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0330 09:35:27.525272   44075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0330 09:35:27.538771   44075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0330 09:35:27.552798   44075 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:35:27.557201   44075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:35:27.567377   44075 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000 for IP: 192.168.76.2
	I0330 09:35:27.567395   44075 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:35:27.567574   44075 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:35:27.567637   44075 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:35:27.567727   44075 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/client.key
	I0330 09:35:27.567817   44075 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key.31bdca25
	I0330 09:35:27.567875   44075 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.key
	I0330 09:35:27.568080   44075 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:35:27.568125   44075 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:35:27.568139   44075 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:35:27.568173   44075 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:35:27.568212   44075 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:35:27.568264   44075 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:35:27.568330   44075 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:35:27.568930   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:35:27.586941   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0330 09:35:27.604887   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:35:27.623795   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/old-k8s-version-331000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0330 09:35:27.641775   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:35:27.659795   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:35:27.677718   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:35:27.695714   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:35:27.713946   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:35:27.731967   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:35:27.749902   44075 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:35:27.768088   44075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:35:27.781271   44075 ssh_runner.go:195] Run: openssl version
	I0330 09:35:27.786800   44075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:35:27.794988   44075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:35:27.799086   44075 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:35:27.799128   44075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:35:27.804624   44075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
	I0330 09:35:27.812547   44075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:35:27.821058   44075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:35:27.825366   44075 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:35:27.825420   44075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:35:27.831260   44075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
	I0330 09:35:27.839148   44075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:35:27.847358   44075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:35:27.851523   44075 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:35:27.851562   44075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:35:27.857071   44075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:35:27.864620   44075 kubeadm.go:401] StartCluster: {Name:old-k8s-version-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-331000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:35:27.864732   44075 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:35:27.885862   44075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:35:27.894235   44075 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0330 09:35:27.894252   44075 kubeadm.go:633] restartCluster start
	I0330 09:35:27.894304   44075 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0330 09:35:27.901753   44075 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:27.901824   44075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-331000
	I0330 09:35:27.964752   44075 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-331000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:35:27.964922   44075 kubeconfig.go:146] "old-k8s-version-331000" context is missing from /Users/jenkins/minikube-integration/16199-24978/kubeconfig - will repair!
	I0330 09:35:27.965247   44075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:35:27.966890   44075 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0330 09:35:27.974884   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:27.974951   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:27.984000   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:28.485324   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:28.485516   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:28.497059   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:28.985219   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:28.985398   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:28.996916   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:29.486131   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:29.486275   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:29.497560   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:29.985433   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:29.985584   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:29.996910   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:30.486144   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:30.486333   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:30.497581   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:30.984452   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:30.984583   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:30.995782   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:31.484987   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:31.485161   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:31.496772   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:31.986129   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:31.986281   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:31.997699   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:32.484530   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:32.496963   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:32.508057   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:32.985063   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:32.985235   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:32.996565   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:33.485209   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:33.485338   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:33.496833   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:33.986148   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:33.986323   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:33.997664   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:34.484341   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:34.484487   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:34.494578   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:34.986265   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:34.986375   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:34.997470   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:35.486245   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:35.486366   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:35.496707   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:35.986134   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:35.986301   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:35.998033   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:36.485507   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:36.485653   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:36.496663   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:36.984538   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:36.984667   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:36.995857   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:37.486131   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:37.496681   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:37.508625   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:37.985044   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:37.985176   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:37.996326   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:37.996336   44075 api_server.go:165] Checking apiserver status ...
	I0330 09:35:37.996385   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:35:38.005275   44075 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:35:38.005288   44075 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0330 09:35:38.005296   44075 kubeadm.go:1120] stopping kube-system containers ...
	I0330 09:35:38.005377   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:35:38.026298   44075 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0330 09:35:38.037062   44075 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:35:38.045083   44075 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Mar 30 16:31 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Mar 30 16:31 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Mar 30 16:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Mar 30 16:31 /etc/kubernetes/scheduler.conf
	
	I0330 09:35:38.045147   44075 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0330 09:35:38.052820   44075 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0330 09:35:38.060577   44075 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0330 09:35:38.068250   44075 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0330 09:35:38.076254   44075 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:35:38.084299   44075 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0330 09:35:38.084311   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:35:38.138174   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:35:38.938212   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:35:39.105104   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:35:39.162220   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:35:39.242149   44075 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:35:39.242221   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:39.751496   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:40.253476   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:40.752184   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:41.252089   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:41.753416   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:42.253171   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:42.753471   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:43.252516   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:43.751898   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:44.251619   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:44.751694   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:45.253464   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:45.751907   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:46.251777   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:46.751343   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:47.251799   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:47.752517   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:48.252415   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:48.751536   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:49.251480   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:49.753440   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:50.253398   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:50.753396   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:51.251321   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:51.751893   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:52.251860   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:52.751393   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:53.251928   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:53.751397   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:54.251672   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:54.751783   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:55.252490   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:55.753426   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:56.253257   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:56.752837   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:57.251290   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:57.751792   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:58.253442   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:58.751721   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:59.251425   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:35:59.753443   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:00.251464   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:00.751413   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:01.252299   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:01.753083   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:02.253442   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:02.753426   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:03.251984   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:03.751390   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:04.251541   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:04.752830   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:05.252683   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:05.751918   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:06.253482   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:06.753465   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:07.252477   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:07.752142   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:08.251881   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:08.752403   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:09.252863   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:09.753047   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:10.253468   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:10.752989   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:11.251557   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:11.751603   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:12.253445   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:12.753451   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:13.251672   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:13.751946   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:14.252327   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:14.751810   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:15.251547   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:15.751386   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:16.252998   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:16.751414   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:17.251415   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:17.752097   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:18.253512   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:18.751679   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:19.252690   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:19.753438   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:20.253445   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:20.753290   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:21.253223   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:21.752303   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:22.252031   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:22.752327   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:23.253451   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:23.751363   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:24.252139   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:24.753510   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:25.253038   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:25.752117   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:26.252036   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:26.751401   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:27.251332   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:27.751432   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:28.251805   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:28.751493   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:29.251329   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:29.751337   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:30.251644   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:30.751339   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:31.253330   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:31.751879   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:32.251956   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:32.752876   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:33.251580   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:33.751915   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:34.251527   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:34.752004   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:35.251376   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:35.751428   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:36.251481   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:36.751672   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:37.252057   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:37.753473   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:38.251344   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:38.751426   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:39.252938   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:36:39.274916   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.274929   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:36:39.274996   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:36:39.294428   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.294454   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:36:39.294535   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:36:39.314279   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.314292   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:36:39.314360   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:36:39.335121   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.335135   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:36:39.335204   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:36:39.355360   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.355374   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:36:39.355447   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:36:39.377145   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.377158   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:36:39.377228   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:36:39.396990   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.397002   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:36:39.397072   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:36:39.417113   44075 logs.go:277] 0 containers: []
	W0330 09:36:39.417126   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:36:39.417133   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:36:39.417140   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:36:41.463929   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04677425s)
	I0330 09:36:41.464088   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:36:41.464096   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:36:41.502106   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:36:41.502123   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:36:41.515273   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:36:41.515286   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:36:41.570931   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:36:41.570948   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:36:41.570958   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:36:44.092969   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:44.253524   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:36:44.275296   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.275309   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:36:44.275386   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:36:44.294895   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.294908   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:36:44.294983   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:36:44.314998   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.315013   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:36:44.315086   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:36:44.335562   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.335575   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:36:44.335648   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:36:44.357256   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.357271   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:36:44.357343   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:36:44.378918   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.378933   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:36:44.379004   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:36:44.399377   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.399390   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:36:44.399461   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:36:44.437713   44075 logs.go:277] 0 containers: []
	W0330 09:36:44.437727   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:36:44.437735   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:36:44.437742   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:36:44.459320   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:36:44.459333   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:36:46.504344   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044994689s)
	I0330 09:36:46.504500   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:36:46.504510   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:36:46.540919   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:36:46.540935   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:36:46.553794   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:36:46.553809   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:36:46.610979   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:36:49.112422   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:49.251868   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:36:49.273129   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.273142   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:36:49.273217   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:36:49.292884   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.292897   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:36:49.292973   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:36:49.314020   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.314036   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:36:49.314108   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:36:49.335662   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.335679   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:36:49.335777   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:36:49.356460   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.356475   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:36:49.356545   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:36:49.376121   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.376135   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:36:49.376208   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:36:49.397107   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.397121   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:36:49.397196   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:36:49.416669   44075 logs.go:277] 0 containers: []
	W0330 09:36:49.416682   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:36:49.416689   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:36:49.416700   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:36:49.438449   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:36:49.438468   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:36:51.487910   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049422224s)
	I0330 09:36:51.488017   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:36:51.488027   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:36:51.530355   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:36:51.530383   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:36:51.553459   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:36:51.553482   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:36:51.612128   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:36:54.112244   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:54.251492   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:36:54.275594   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.275612   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:36:54.275682   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:36:54.295015   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.295027   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:36:54.295098   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:36:54.313756   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.313769   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:36:54.313837   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:36:54.336726   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.336747   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:36:54.336861   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:36:54.364212   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.364223   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:36:54.364321   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:36:54.388245   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.388263   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:36:54.388331   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:36:54.413962   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.413975   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:36:54.414040   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:36:54.434259   44075 logs.go:277] 0 containers: []
	W0330 09:36:54.434271   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:36:54.434278   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:36:54.434285   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:36:54.472077   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:36:54.472094   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:36:54.484942   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:36:54.484958   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:36:54.547020   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:36:54.547036   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:36:54.547043   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:36:54.570408   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:36:54.570425   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:36:56.621488   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051049107s)
	I0330 09:36:59.121821   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:36:59.251483   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:36:59.274888   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.274902   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:36:59.274970   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:36:59.294615   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.294629   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:36:59.294702   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:36:59.313774   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.313787   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:36:59.313861   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:36:59.339143   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.339161   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:36:59.339245   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:36:59.362359   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.362377   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:36:59.362473   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:36:59.389205   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.389219   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:36:59.389307   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:36:59.421125   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.421148   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:36:59.421239   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:36:59.446605   44075 logs.go:277] 0 containers: []
	W0330 09:36:59.446625   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:36:59.446635   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:36:59.446645   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:36:59.508183   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:36:59.508195   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:36:59.508204   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:36:59.531216   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:36:59.531236   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:01.579693   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04844349s)
	I0330 09:37:01.579812   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:01.579820   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:01.617676   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:01.617696   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:04.131309   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:04.251534   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:04.271370   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.271384   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:04.271479   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:04.290904   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.290917   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:04.290990   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:04.310469   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.310483   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:04.310554   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:04.331057   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.331071   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:04.331143   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:04.354278   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.354303   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:04.354383   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:04.374979   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.374995   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:04.375078   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:04.395682   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.395698   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:04.395772   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:04.415039   44075 logs.go:277] 0 containers: []
	W0330 09:37:04.415052   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:04.415059   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:04.415067   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:04.457677   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:04.457699   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:04.472613   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:04.472641   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:04.532405   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:04.532421   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:04.532436   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:04.556129   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:04.556146   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:06.602317   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046139246s)
	I0330 09:37:09.104599   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:09.252149   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:09.274527   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.274543   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:09.274611   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:09.293687   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.293700   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:09.293773   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:09.314208   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.314226   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:09.314297   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:09.333585   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.333598   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:09.333678   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:09.354765   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.354779   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:09.354856   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:09.374672   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.374686   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:09.374762   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:09.421340   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.421364   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:09.421490   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:09.442962   44075 logs.go:277] 0 containers: []
	W0330 09:37:09.442977   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:09.442985   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:09.442993   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:09.481102   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:09.481131   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:09.493502   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:09.493515   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:09.551530   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:09.551547   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:09.551562   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:09.573223   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:09.573244   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:11.621133   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047873883s)
	I0330 09:37:14.122552   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:14.253012   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:14.274662   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.274677   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:14.274757   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:14.295539   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.295552   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:14.295624   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:14.315662   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.315675   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:14.315749   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:14.334995   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.335013   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:14.335087   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:14.354860   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.354872   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:14.354943   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:14.375185   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.375198   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:14.375271   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:14.394961   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.394973   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:14.395042   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:14.415097   44075 logs.go:277] 0 containers: []
	W0330 09:37:14.415110   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:14.415118   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:14.415127   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:14.452126   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:14.452143   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:14.464886   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:14.464901   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:14.521025   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:14.521042   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:14.521049   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:14.542074   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:14.542088   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:16.585961   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043858968s)
	I0330 09:37:19.087743   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:19.251642   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:19.272832   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.272845   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:19.272898   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:19.294202   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.294215   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:19.294284   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:19.316648   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.316662   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:19.316729   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:19.341641   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.341653   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:19.341717   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:19.363014   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.363026   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:19.363098   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:19.384576   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.384589   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:19.384653   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:19.407240   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.407252   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:19.407318   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:19.430612   44075 logs.go:277] 0 containers: []
	W0330 09:37:19.430626   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:19.430633   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:19.430642   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:21.480555   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049897099s)
	I0330 09:37:21.480700   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:21.480707   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:21.522026   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:21.522042   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:21.534769   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:21.534788   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:21.597164   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:21.597176   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:21.597184   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:24.120181   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:24.251987   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:24.273206   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.273219   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:24.273277   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:24.294769   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.294781   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:24.294853   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:24.315395   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.315407   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:24.315460   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:24.336387   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.336399   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:24.336461   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:24.358580   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.358593   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:24.358647   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:24.379356   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.379369   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:24.379446   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:24.401216   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.401230   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:24.401300   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:24.425211   44075 logs.go:277] 0 containers: []
	W0330 09:37:24.425225   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:24.425233   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:24.425240   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:24.464417   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:24.464430   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:24.478057   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:24.478072   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:24.541151   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:24.541163   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:24.541171   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:24.565430   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:24.565450   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:26.617216   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051744808s)
	I0330 09:37:29.117485   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:29.251582   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:29.282830   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.282844   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:29.282917   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:29.308048   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.308065   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:29.308156   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:29.330114   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.330128   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:29.330203   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:29.354015   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.354029   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:29.354104   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:29.381920   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.381934   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:29.382013   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:29.407881   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.407896   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:29.407981   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:29.432107   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.432120   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:29.432193   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:29.464828   44075 logs.go:277] 0 containers: []
	W0330 09:37:29.464844   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:29.464853   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:29.464866   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:29.496003   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:29.496029   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:31.548209   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052163825s)
	I0330 09:37:31.548328   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:31.548336   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:31.599052   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:31.599073   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:31.617179   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:31.617198   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:31.685905   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:34.188034   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:34.251616   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:34.279107   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.279128   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:34.279225   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:34.309452   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.309465   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:34.309552   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:34.336732   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.336748   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:34.336839   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:34.366091   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.366118   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:34.366210   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:34.388803   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.388837   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:34.388973   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:34.432079   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.432096   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:34.432189   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:34.457862   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.457881   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:34.457967   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:34.488201   44075 logs.go:277] 0 containers: []
	W0330 09:37:34.488221   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:34.488231   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:34.488244   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:34.560635   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:34.560663   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:34.585573   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:34.585597   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:34.682741   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:34.682755   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:34.682763   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:34.705976   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:34.705995   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:36.764910   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058899408s)
	I0330 09:37:39.265280   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:39.753456   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:39.774455   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.774468   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:39.774540   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:39.795266   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.795280   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:39.795356   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:39.814600   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.814613   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:39.814674   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:39.834391   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.834405   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:39.834476   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:39.856962   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.856976   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:39.857051   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:39.880821   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.880837   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:39.880911   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:39.906158   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.906172   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:39.906251   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:39.928741   44075 logs.go:277] 0 containers: []
	W0330 09:37:39.928755   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:39.928762   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:39.928770   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:39.968562   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:39.968580   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:39.981338   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:39.981351   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:40.041052   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:40.041064   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:40.041071   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:40.063658   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:40.063674   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:42.110630   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046942011s)
	I0330 09:37:44.611498   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:44.751682   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:44.775133   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.775147   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:44.775218   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:44.795474   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.795488   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:44.795557   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:44.815612   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.815624   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:44.815695   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:44.839106   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.839124   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:44.839213   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:44.869704   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.869722   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:44.869827   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:44.890819   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.890867   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:44.890951   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:44.911619   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.911633   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:44.911714   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:44.932495   44075 logs.go:277] 0 containers: []
	W0330 09:37:44.932511   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:44.932525   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:44.932534   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:46.983347   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050796338s)
	I0330 09:37:46.983451   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:46.983459   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:47.021623   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:47.021637   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:47.034248   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:47.034265   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:47.102464   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:47.102482   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:47.102492   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:49.628826   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:49.751537   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:49.775503   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.775518   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:49.775589   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:49.798283   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.798296   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:49.798367   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:49.818653   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.818666   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:49.818735   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:49.841597   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.841610   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:49.841688   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:49.860704   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.860717   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:49.860785   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:49.881389   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.881402   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:49.881469   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:49.900482   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.900494   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:49.900565   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:49.920608   44075 logs.go:277] 0 containers: []
	W0330 09:37:49.920622   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:49.920630   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:49.920638   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:49.980486   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:49.980499   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:49.980507   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:50.002994   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:50.003012   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:52.049930   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046902729s)
	I0330 09:37:52.050038   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:52.050046   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:52.087095   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:52.087110   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:54.600279   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:54.752334   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:54.774219   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.774233   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:54.774300   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:54.792989   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.793002   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:54.793075   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:54.812242   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.812255   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:54.812322   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:54.831668   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.831679   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:54.831730   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:54.852282   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.852297   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:54.852369   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:54.871950   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.871963   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:54.872034   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:54.891587   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.891605   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:54.891677   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:54.911920   44075 logs.go:277] 0 containers: []
	W0330 09:37:54.911933   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:54.911940   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:54.911947   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:54.950401   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:54.950416   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:54.962885   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:54.962899   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:37:55.017689   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:37:55.017700   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:37:55.017712   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:37:55.039190   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:37:55.039203   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:37:57.084534   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045315699s)
	I0330 09:37:59.586606   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:37:59.753499   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:37:59.774417   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.774431   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:37:59.774500   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:37:59.794146   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.794159   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:37:59.794228   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:37:59.814222   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.814235   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:37:59.814312   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:37:59.835095   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.835109   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:37:59.835187   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:37:59.855408   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.855421   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:37:59.855492   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:37:59.875037   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.875054   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:37:59.875133   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:37:59.922637   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.922651   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:37:59.922727   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:37:59.943732   44075 logs.go:277] 0 containers: []
	W0330 09:37:59.943745   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:37:59.943753   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:37:59.943760   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:37:59.980904   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:37:59.980917   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:37:59.993988   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:37:59.994003   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:00.050424   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:00.050435   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:00.050442   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:00.071489   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:00.071504   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:02.117425   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045906226s)
	I0330 09:38:04.617728   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:04.753432   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:04.774870   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.774883   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:04.774951   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:04.794440   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.794453   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:04.794523   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:04.814094   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.814107   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:04.814176   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:04.834698   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.834712   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:04.834783   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:04.854776   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.854790   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:04.854850   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:04.874095   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.874110   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:04.874180   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:04.893407   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.893422   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:04.893494   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:04.913468   44075 logs.go:277] 0 containers: []
	W0330 09:38:04.913481   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:04.913489   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:04.913496   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:04.934401   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:04.934415   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:06.980802   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046372862s)
	I0330 09:38:06.980907   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:06.980915   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:07.018842   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:07.018860   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:07.031294   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:07.031307   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:07.086328   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:09.586785   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:09.752693   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:09.774168   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.774182   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:09.774253   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:09.794505   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.794518   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:09.794599   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:09.814412   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.814425   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:09.814495   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:09.835960   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.835973   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:09.836041   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:09.855569   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.855590   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:09.855664   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:09.875930   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.875943   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:09.876012   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:09.896684   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.896697   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:09.896766   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:09.916434   44075 logs.go:277] 0 containers: []
	W0330 09:38:09.916447   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:09.916454   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:09.916460   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:09.954497   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:09.954529   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:09.967494   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:09.967508   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:10.022685   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:10.022701   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:10.022709   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:10.044390   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:10.044403   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:12.089485   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045066083s)
	I0330 09:38:14.591417   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:14.752266   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:14.773247   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.773260   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:14.773331   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:14.792304   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.792317   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:14.792383   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:14.813832   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.813844   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:14.813913   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:14.834363   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.834376   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:14.834448   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:14.855044   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.855057   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:14.855126   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:14.875056   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.875072   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:14.875145   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:14.925190   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.925203   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:14.925271   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:14.946275   44075 logs.go:277] 0 containers: []
	W0330 09:38:14.946288   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:14.946295   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:14.946302   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:15.001911   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:15.001923   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:15.001930   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:15.023881   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:15.023896   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:17.071049   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047139462s)
	I0330 09:38:17.071155   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:17.071163   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:17.109517   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:17.109534   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:19.624594   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:19.752789   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:19.774515   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.774529   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:19.774596   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:19.794062   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.794074   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:19.794141   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:19.812913   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.812926   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:19.812995   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:19.832579   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.832596   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:19.832668   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:19.852330   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.852343   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:19.852411   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:19.871919   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.871933   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:19.872001   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:19.891602   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.891616   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:19.891683   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:19.910980   44075 logs.go:277] 0 containers: []
	W0330 09:38:19.910993   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:19.910999   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:19.911007   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:19.949471   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:19.949485   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:19.962049   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:19.962064   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:20.017112   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:20.017125   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:20.017134   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:20.038417   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:20.038432   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:22.083909   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045462805s)
	I0330 09:38:24.584866   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:24.752443   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:24.775470   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.775483   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:24.775550   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:24.795424   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.795437   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:24.795507   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:24.813905   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.813919   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:24.813988   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:24.834550   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.834562   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:24.834635   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:24.853835   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.853848   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:24.853915   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:24.872415   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.872427   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:24.872496   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:24.891472   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.891485   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:24.891553   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:24.910288   44075 logs.go:277] 0 containers: []
	W0330 09:38:24.910300   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:24.910307   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:24.910314   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:24.932250   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:24.932265   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:26.987625   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055343053s)
	I0330 09:38:26.987767   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:26.987776   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:27.036865   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:27.036884   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:27.053150   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:27.053174   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:27.141256   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:29.642311   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:29.751865   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:29.772937   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.772950   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:29.773018   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:29.793080   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.793093   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:29.793159   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:29.812469   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.812483   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:29.812551   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:29.832696   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.832709   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:29.832780   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:29.853929   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.853943   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:29.854013   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:29.874804   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.874817   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:29.874888   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:29.927045   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.927059   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:29.927126   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:29.947690   44075 logs.go:277] 0 containers: []
	W0330 09:38:29.947703   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:29.947709   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:29.947716   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:31.996634   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048903402s)
	I0330 09:38:31.996742   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:31.996750   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:32.034701   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:32.034714   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:32.048261   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:32.048275   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:32.105197   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:32.105209   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:32.105217   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:34.629124   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:34.752035   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:34.774316   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.774330   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:34.774399   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:34.793905   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.793918   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:34.793984   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:34.813535   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.813548   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:34.813620   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:34.833052   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.833066   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:34.833137   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:34.852397   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.852410   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:34.852480   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:34.871684   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.871697   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:34.871768   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:34.892144   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.892156   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:34.892222   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:34.911261   44075 logs.go:277] 0 containers: []
	W0330 09:38:34.911272   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:34.911280   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:34.911288   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:34.966771   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:34.966782   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:34.966789   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:34.987584   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:34.987598   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:37.037735   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050122495s)
	I0330 09:38:37.037842   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:37.037849   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:37.075956   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:37.075986   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:39.589700   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:39.751658   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:39.774949   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.774962   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:39.775041   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:39.796671   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.796689   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:39.796772   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:39.818745   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.818759   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:39.818827   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:39.843711   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.843727   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:39.843802   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:39.866640   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.866654   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:39.866752   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:39.888898   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.888912   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:39.888986   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:39.911869   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.911885   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:39.911957   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:39.933650   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.933671   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:39.933680   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:39.933689   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:39.973820   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:39.973837   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:39.986759   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:39.986775   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:40.047855   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:40.047867   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:40.047874   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:40.070616   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:40.070631   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:42.117746   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047099299s)
	I0330 09:38:44.620042   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:44.753669   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:44.775118   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.775132   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:44.775202   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:44.795935   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.795948   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:44.796016   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:44.815455   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.815469   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:44.815543   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:44.835159   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.835173   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:44.835242   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:44.855484   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.855498   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:44.855577   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:44.875723   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.875735   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:44.875805   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:44.895146   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.895158   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:44.895229   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:44.938559   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.938572   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:44.938579   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:44.938586   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:46.984801   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046193204s)
	I0330 09:38:46.984906   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:46.984914   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:47.021900   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:47.021914   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:47.034181   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:47.034195   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:47.089524   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:47.089537   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:47.089544   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:49.613617   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:49.753668   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:49.775059   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.775072   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:49.775140   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:49.793872   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.793885   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:49.793953   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:49.812776   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.812789   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:49.812856   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:49.832541   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.832552   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:49.832630   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:49.852407   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.852420   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:49.852489   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:49.872725   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.872737   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:49.872804   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:49.892499   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.892511   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:49.892578   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:49.911649   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.911663   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:49.911670   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:49.911677   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:49.923970   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:49.923982   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:49.978920   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:49.978932   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:49.978940   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:49.999797   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:49.999811   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:52.044591   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044765072s)
	I0330 09:38:52.044706   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:52.044720   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:54.582925   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:54.752279   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:54.774410   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.774426   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:54.774496   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:54.795245   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.795259   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:54.795340   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:54.814973   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.814988   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:54.815058   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:54.837578   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.837596   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:54.837680   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:54.861763   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.861780   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:54.861897   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:54.887553   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.887566   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:54.887647   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:54.906493   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.906505   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:54.906572   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:54.927089   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.927108   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:54.927117   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:54.927127   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:54.974700   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:54.974721   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:54.987588   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:54.987611   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:55.046244   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:55.046257   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:55.046267   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:55.069304   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:55.069323   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:57.116164   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046827047s)
	I0330 09:38:59.616372   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:59.751631   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:59.772968   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.772980   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:59.773046   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:59.792454   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.792467   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:59.792538   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:59.812960   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.812973   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:59.813043   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:59.833683   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.833698   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:59.833779   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:59.854249   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.854262   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:59.854338   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:59.876060   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.876074   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:59.876159   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:59.928403   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.928417   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:59.928496   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:59.948661   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.948674   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:59.948681   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:59.948688   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:59.970582   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:59.970600   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:02.016327   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045711683s)
	I0330 09:39:02.016435   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:02.016442   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:02.053922   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:02.053937   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:02.066877   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:02.066893   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:02.124331   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:04.624592   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:04.752131   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:04.774972   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.774985   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:04.775054   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:04.795624   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.795641   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:04.795741   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:04.815258   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.815271   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:04.815343   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:04.835053   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.835067   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:04.835134   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:04.854082   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.854094   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:04.854159   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:04.874244   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.874257   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:04.874324   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:04.894448   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.894460   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:04.894528   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:04.914607   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.914621   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:04.914629   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:04.914637   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:04.952789   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:04.952807   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:04.965372   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:04.965385   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:05.033516   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:05.033537   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:05.033547   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:05.057949   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:05.057964   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:07.103333   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045354029s)
	I0330 09:39:09.603795   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:09.753164   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:09.774567   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.774580   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:09.774650   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:09.794227   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.794239   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:09.794309   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:09.813473   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.813486   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:09.813555   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:09.833447   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.833460   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:09.833529   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:09.853061   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.853074   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:09.853144   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:09.872976   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.872989   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:09.873059   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:09.892552   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.892565   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:09.892647   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:09.912233   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.912246   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:09.912253   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:09.912261   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:09.950094   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:09.950113   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:09.963501   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:09.963517   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:10.019047   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:10.019059   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:10.019066   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:10.040446   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:10.040462   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:12.089666   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049187269s)
	I0330 09:39:14.590198   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:14.751942   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:14.774009   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.774022   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:14.774090   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:14.794543   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.794555   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:14.794625   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:14.813863   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.813878   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:14.813949   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:14.835726   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.835740   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:14.835809   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:14.855619   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.855636   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:14.855717   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:14.876498   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.876511   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:14.876580   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:14.923277   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.923292   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:14.923362   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:14.943716   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.943729   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:14.943736   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:14.943744   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:14.965262   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:14.965276   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:17.010472   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045180704s)
	I0330 09:39:17.010577   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:17.010584   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:17.048611   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:17.048628   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:17.061779   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:17.061793   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:17.117469   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:19.617704   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:19.752449   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:19.774802   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.774815   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:19.774883   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:19.793918   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.793931   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:19.793999   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:19.813324   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.813337   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:19.813404   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:19.832073   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.832086   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:19.832154   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:19.851698   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.851711   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:19.851778   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:19.871267   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.871280   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:19.871348   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:19.891063   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.891075   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:19.891144   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:19.911720   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.911733   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:19.911740   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:19.911748   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:19.933072   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:19.933087   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:21.982053   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048950556s)
	I0330 09:39:21.982157   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:21.982165   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:22.019375   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:22.019388   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:22.031962   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:22.031977   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:22.087413   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:24.587734   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:24.752081   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:24.774883   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.774896   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:24.774965   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:24.794298   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.794310   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:24.794381   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:24.813586   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.813600   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:24.813673   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:24.833270   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.833283   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:24.833351   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:24.853481   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.853493   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:24.853562   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:24.874267   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.874280   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:24.874346   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:24.894082   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.894095   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:24.894164   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:24.913854   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.913866   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:24.913873   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:24.913880   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:24.953053   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:24.953067   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:24.965556   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:24.965570   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:25.021464   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:25.021475   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:25.021482   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:25.044710   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:25.044726   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:27.089609   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044867071s)
	I0330 09:39:29.590032   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:29.752870   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:29.776152   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.776165   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:29.776236   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:29.795591   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.795604   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:29.795671   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:29.814647   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.814661   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:29.814738   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:29.835033   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.835046   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:29.835117   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:29.856174   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.856187   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:29.856257   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:29.876707   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.876722   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:29.876800   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:29.927493   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.927510   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:29.927588   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:29.947729   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.947743   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:29.947750   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:29.947757   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:29.986316   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:29.986331   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:29.998985   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:29.998999   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:30.056880   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:30.056892   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:30.056899   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:30.078514   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:30.078528   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:32.121440   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.042897105s)
	I0330 09:39:34.623358   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:34.753781   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:34.776068   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.776082   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:34.776149   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:34.795314   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.795325   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:34.795393   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:34.814594   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.814608   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:34.814676   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:34.834644   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.834657   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:34.834730   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:34.853809   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.853822   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:34.853892   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:34.876480   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.876493   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:34.876562   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:34.896280   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.896293   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:34.896358   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:34.915957   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.915970   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:34.915977   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:34.915985   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:34.971036   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:34.971048   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:34.971056   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:34.991768   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:34.991784   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:37.038619   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046818278s)
	I0330 09:39:37.038748   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:37.038757   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:37.075929   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:37.075948   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:39.589055   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:39.598858   44075 kubeadm.go:637] restartCluster took 4m11.704240904s
	W0330 09:39:39.598927   44075 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0330 09:39:39.598942   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0330 09:39:40.013458   44075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:39:40.023505   44075 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:39:40.031781   44075 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:39:40.031840   44075 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:39:40.042087   44075 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:39:40.042127   44075 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:39:40.163033   44075 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:39:40.163115   44075 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:39:40.193522   44075 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:39:40.271323   44075 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:41:35.961076   44075 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:41:35.961172   44075 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0330 09:41:35.964956   44075 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:41:35.964995   44075 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:41:35.965046   44075 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:41:35.965114   44075 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:41:35.965200   44075 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:41:35.965290   44075 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:41:35.965361   44075 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:41:35.965399   44075 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:41:35.965454   44075 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:41:35.987968   44075 out.go:204]   - Generating certificates and keys ...
	I0330 09:41:35.988080   44075 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:41:35.988192   44075 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:41:35.988321   44075 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:41:35.988409   44075 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:41:35.988522   44075 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:41:35.988610   44075 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:41:35.988713   44075 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:41:35.988813   44075 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:41:35.988927   44075 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:41:35.989048   44075 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:41:35.989105   44075 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:41:35.989190   44075 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:41:35.989276   44075 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:41:35.989361   44075 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:41:35.989462   44075 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:41:35.989557   44075 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:41:35.989654   44075 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:41:36.029684   44075 out.go:204]   - Booting up control plane ...
	I0330 09:41:36.029817   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:41:36.029931   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:41:36.030027   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:41:36.030120   44075 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:41:36.030287   44075 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:41:36.030334   44075 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:41:36.030393   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.030660   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.030783   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031001   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031088   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031295   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031364   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031534   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031598   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031711   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031715   44075 kubeadm.go:322] 
	I0330 09:41:36.031743   44075 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:41:36.031808   44075 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:41:36.031819   44075 kubeadm.go:322] 
	I0330 09:41:36.031854   44075 kubeadm.go:322] This error is likely caused by:
	I0330 09:41:36.031894   44075 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:41:36.032003   44075 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:41:36.032012   44075 kubeadm.go:322] 
	I0330 09:41:36.032088   44075 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:41:36.032114   44075 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:41:36.032143   44075 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:41:36.032149   44075 kubeadm.go:322] 
	I0330 09:41:36.032256   44075 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:41:36.032355   44075 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:41:36.032431   44075 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:41:36.032472   44075 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:41:36.032550   44075 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:41:36.032587   44075 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0330 09:41:36.032711   44075 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0330 09:41:36.032739   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0330 09:41:36.443725   44075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:41:36.454031   44075 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:41:36.454085   44075 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:41:36.461956   44075 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:41:36.461977   44075 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:41:36.510689   44075 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:41:36.510739   44075 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:41:36.682210   44075 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:41:36.682309   44075 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:41:36.682383   44075 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:41:36.840717   44075 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:41:36.841441   44075 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:41:36.848152   44075 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:41:36.918163   44075 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:41:36.939692   44075 out.go:204]   - Generating certificates and keys ...
	I0330 09:41:36.939778   44075 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:41:36.939831   44075 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:41:36.939902   44075 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:41:36.939966   44075 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:41:36.940046   44075 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:41:36.940107   44075 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:41:36.940166   44075 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:41:36.940233   44075 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:41:36.940328   44075 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:41:36.940397   44075 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:41:36.940428   44075 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:41:36.940468   44075 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:41:37.085927   44075 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:41:37.288789   44075 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:41:37.365910   44075 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:41:37.516951   44075 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:41:37.517546   44075 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:41:37.538396   44075 out.go:204]   - Booting up control plane ...
	I0330 09:41:37.538513   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:41:37.538598   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:41:37.538671   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:41:37.538758   44075 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:41:37.538956   44075 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:42:17.525758   44075 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:42:17.526476   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:17.526693   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:42:22.528035   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:22.528259   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:42:32.528965   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:32.529211   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:42:52.530066   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:52.530221   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:43:32.532857   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:43:32.533080   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:43:32.533094   44075 kubeadm.go:322] 
	I0330 09:43:32.533149   44075 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:43:32.533195   44075 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:43:32.533200   44075 kubeadm.go:322] 
	I0330 09:43:32.533244   44075 kubeadm.go:322] This error is likely caused by:
	I0330 09:43:32.533308   44075 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:43:32.533433   44075 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:43:32.533451   44075 kubeadm.go:322] 
	I0330 09:43:32.533573   44075 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:43:32.533615   44075 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:43:32.533648   44075 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:43:32.533654   44075 kubeadm.go:322] 
	I0330 09:43:32.533784   44075 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:43:32.533898   44075 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:43:32.534007   44075 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:43:32.534097   44075 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:43:32.534209   44075 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:43:32.534252   44075 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0330 09:43:32.536939   44075 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:43:32.537013   44075 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:43:32.537114   44075 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:43:32.537190   44075 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:43:32.537266   44075 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:43:32.537330   44075 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0330 09:43:32.537347   44075 kubeadm.go:403] StartCluster complete in 8m4.672052921s
	I0330 09:43:32.537453   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:43:32.558006   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.558024   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:43:32.558099   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:43:32.578804   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.578817   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:43:32.578887   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:43:32.598344   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.598356   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:43:32.598426   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:43:32.618824   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.618837   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:43:32.618903   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:43:32.639169   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.639181   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:43:32.639249   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:43:32.660415   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.660429   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:43:32.660496   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:43:32.680087   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.680107   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:43:32.680176   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:43:32.701444   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.701457   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:43:32.701465   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:43:32.701473   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:43:34.744853   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043365791s)
	I0330 09:43:34.744974   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:43:34.744982   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:43:34.781964   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:43:34.781983   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:43:34.794825   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:43:34.794844   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:43:34.855199   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:43:34.855213   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:43:34.855220   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0330 09:43:34.876486   44075 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0330 09:43:34.876506   44075 out.go:239] * 
	* 
	W0330 09:43:34.876607   44075 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:43:34.876622   44075 out.go:239] * 
	* 
	W0330 09:43:34.877184   44075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0330 09:43:34.961936   44075 out.go:177] 
	W0330 09:43:35.004175   44075 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:43:35.004295   44075 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0330 09:43:35.004357   44075 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0330 09:43:35.026017   44075 out.go:177] 

                                                
                                                
** /stderr **
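The kubelet never became healthy during this 'kubeadm init', and both the kubeadm warnings and minikube's own suggestion above point at the cgroup-driver mismatch (Docker is using "cgroupfs" while "systemd" is recommended). A minimal follow-up sketch, using only commands already named in the output above; the profile name, binary path, memory/version flags, and the kubelet.cgroup-driver extra-config are taken from this log, and the retry is the log's suggestion rather than a verified fix:

	# on the node (e.g. via 'out/minikube-darwin-amd64 ssh -p old-k8s-version-331000'):
	systemctl status kubelet
	journalctl -xeu kubelet
	docker ps -a | grep kube | grep -v pause   # locate a crashed control-plane container
	docker logs CONTAINERID                    # then read its logs

	# from the host, retry the start with the systemd cgroup driver suggested above:
	out/minikube-darwin-amd64 start -p old-k8s-version-331000 --memory=2200 \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd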
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-331000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-331000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-331000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a",
	        "Created": "2023-03-30T16:29:25.810047609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:35:23.982718937Z",
	            "FinishedAt": "2023-03-30T16:35:20.868468275Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a-json.log",
	        "Name": "/old-k8s-version-331000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-331000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-331000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-331000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-331000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-331000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6790ea0f276be9c604217c0826bb2493527579753993635659c34a69f43b6b3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59138"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59139"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e6790ea0f276",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-331000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "521ae416df38",
	                        "old-k8s-version-331000"
	                    ],
	                    "NetworkID": "6fa3318f5503a289e050622f1a9f4b0bea1af814ad18d1b409b3b8c51f5769ef",
	                    "EndpointID": "d3e12c3eabea1d71c79fbe06fe901d0e28f2e0e0f2e8ff4418b5e3ffe4c96e09",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
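The full 'docker inspect' dump above is kept for the record; for triage, the few fields that matter here (the node IP on the old-k8s-version-331000 network and the host ports mapped to 22/tcp and 8443/tcp) can be pulled individually with Go-template format strings. A sketch using the same template expressions minikube itself runs later in this log, assuming the old-k8s-version-331000 container from this run:

	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-331000
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-331000
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-331000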
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (403.282997ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
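'minikube status --format={{.Host}}' reports the host as Running but exits with status 2, which the harness explicitly treats as possibly OK (the non-zero code appears to reflect other components, such as the apiserver, not being up rather than a failure of the status command itself). A quick cross-check, assuming the same binary and profile as above; the docker format string mirrors the one minikube runs later in this log:

	out/minikube-darwin-amd64 status -p old-k8s-version-331000
	docker container inspect old-k8s-version-331000 --format={{.State.Status}}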
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-331000 logs -n 25
E0330 09:43:38.065999   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-331000 logs -n 25: (3.91168985s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-378000 sudo                            | kubenet-378000         | jenkins | v1.29.0 | 30 Mar 23 09:30 PDT | 30 Mar 23 09:30 PDT |
	|         | containerd config dump                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-378000 sudo                            | kubenet-378000         | jenkins | v1.29.0 | 30 Mar 23 09:30 PDT |                     |
	|         | systemctl status crio --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-378000 sudo                            | kubenet-378000         | jenkins | v1.29.0 | 30 Mar 23 09:30 PDT | 30 Mar 23 09:30 PDT |
	|         | systemctl cat crio --no-pager                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-378000 sudo find                       | kubenet-378000         | jenkins | v1.29.0 | 30 Mar 23 09:30 PDT | 30 Mar 23 09:30 PDT |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-378000 sudo crio                       | kubenet-378000         | jenkins | v1.29.0 | 30 Mar 23 09:30 PDT | 30 Mar 23 09:30 PDT |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p kubenet-378000                                 | kubenet-378000         | jenkins | v1.29.0 | 30 Mar 23 09:30 PDT | 30 Mar 23 09:30 PDT |
	| start   | -p no-preload-578000                              | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:30 PDT | 30 Mar 23 09:31 PDT |
	|         | --memory=2200 --alsologtostderr                   |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-578000        | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:31 PDT | 30 Mar 23 09:31 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-578000                              | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:31 PDT | 30 Mar 23 09:31 PDT |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-578000             | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:31 PDT | 30 Mar 23 09:31 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-578000                              | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:31 PDT | 30 Mar 23 09:36 PDT |
	|         | --memory=2200 --alsologtostderr                   |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.0-rc.0                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-331000   | old-k8s-version-331000 | jenkins | v1.29.0 | 30 Mar 23 09:33 PDT |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-331000                         | old-k8s-version-331000 | jenkins | v1.29.0 | 30 Mar 23 09:35 PDT | 30 Mar 23 09:35 PDT |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-331000        | old-k8s-version-331000 | jenkins | v1.29.0 | 30 Mar 23 09:35 PDT | 30 Mar 23 09:35 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-331000                         | old-k8s-version-331000 | jenkins | v1.29.0 | 30 Mar 23 09:35 PDT |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-578000 sudo                         | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-578000                              | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-578000                              | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-578000                              | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	| delete  | -p no-preload-578000                              | no-preload-578000      | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	| start   | -p embed-certs-995000                             | embed-certs-995000     | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:38 PDT |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-995000       | embed-certs-995000     | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT | 30 Mar 23 09:38 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-995000                             | embed-certs-995000     | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT | 30 Mar 23 09:38 PDT |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-995000            | embed-certs-995000     | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT | 30 Mar 23 09:38 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-995000                             | embed-certs-995000     | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 09:38:38
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 09:38:38.040583   44633 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:38:38.040794   44633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:38:38.040799   44633 out.go:309] Setting ErrFile to fd 2...
	I0330 09:38:38.040803   44633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:38:38.040946   44633 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:38:38.042576   44633 out.go:303] Setting JSON to false
	I0330 09:38:38.063249   44633 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9486,"bootTime":1680184832,"procs":424,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:38:38.063336   44633 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:38:38.084409   44633 out.go:177] * [embed-certs-995000] minikube v1.29.0 on Darwin 13.3
	I0330 09:38:38.126590   44633 notify.go:220] Checking for updates...
	I0330 09:38:38.147421   44633 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:38:38.168692   44633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:38:38.189794   44633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:38:38.211756   44633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:38:38.232855   44633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:38:38.254768   44633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:38:38.276267   44633 config.go:182] Loaded profile config "embed-certs-995000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:38:38.276956   44633 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:38:38.343490   44633 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:38:38.343627   44633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:38:38.531791   44633 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:38:38.396800478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:38:38.553691   44633 out.go:177] * Using the docker driver based on existing profile
	I0330 09:38:38.575486   44633 start.go:295] selected driver: docker
	I0330 09:38:38.575509   44633 start.go:859] validating driver "docker" against &{Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:embed-certs-995000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:38:38.575630   44633 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:38:38.579810   44633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:38:38.781114   44633 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:38:38.632320432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:38:38.781296   44633 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0330 09:38:38.781316   44633 cni.go:84] Creating CNI manager for ""
	I0330 09:38:38.781330   44633 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:38:38.781341   44633 start_flags.go:319] config:
	{Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:embed-certs-995000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:38:38.803267   44633 out.go:177] * Starting control plane node embed-certs-995000 in cluster embed-certs-995000
	I0330 09:38:38.824749   44633 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:38:38.845669   44633 out.go:177] * Pulling base image ...
	I0330 09:38:38.887885   44633 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 09:38:38.887931   44633 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:38:38.887985   44633 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0330 09:38:38.888006   44633 cache.go:57] Caching tarball of preloaded images
	I0330 09:38:38.888177   44633 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:38:38.888194   44633 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0330 09:38:38.888970   44633 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/config.json ...
	I0330 09:38:38.947910   44633 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:38:38.947929   44633 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:38:38.947949   44633 cache.go:193] Successfully downloaded all kic artifacts
	I0330 09:38:38.947998   44633 start.go:364] acquiring machines lock for embed-certs-995000: {Name:mkb5d3896cff0f81976d73a19a1873b7ea3031c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:38:38.948090   44633 start.go:368] acquired machines lock for "embed-certs-995000" in 74.121µs
	I0330 09:38:38.948115   44633 start.go:96] Skipping create...Using existing machine configuration
	I0330 09:38:38.948125   44633 fix.go:55] fixHost starting: 
	I0330 09:38:38.948363   44633 cli_runner.go:164] Run: docker container inspect embed-certs-995000 --format={{.State.Status}}
	I0330 09:38:39.008716   44633 fix.go:103] recreateIfNeeded on embed-certs-995000: state=Stopped err=<nil>
	W0330 09:38:39.008745   44633 fix.go:129] unexpected machine state, will restart: <nil>
	I0330 09:38:39.030643   44633 out.go:177] * Restarting existing docker container for "embed-certs-995000" ...
	I0330 09:38:39.589700   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:39.751658   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:39.774949   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.774962   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:39.775041   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:39.796671   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.796689   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:39.796772   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:39.818745   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.818759   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:39.818827   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:39.843711   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.843727   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:39.843802   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:39.866640   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.866654   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:39.866752   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:39.888898   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.888912   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:39.888986   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:39.911869   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.911885   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:39.911957   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:39.933650   44075 logs.go:277] 0 containers: []
	W0330 09:38:39.933671   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:39.933680   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:39.933689   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:39.973820   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:39.973837   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:39.986759   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:39.986775   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:40.047855   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:40.047867   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:40.047874   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:40.070616   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:40.070631   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:42.117746   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047099299s)
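
The block above shows the second start job in this log (pid 44075, a Kubernetes v1.16.0 cluster) cycling through container checks and finding no control-plane containers at all; every "describe nodes" attempt is refused on localhost:8443 because the apiserver never came up. For readers unfamiliar with that pattern, the following is a minimal, self-contained Go sketch of a poll-until-deadline check of this kind. It is illustrative only, not minikube's actual api_server.go code; the function name, the local pgrep call, and the 500ms/60s values are assumptions chosen to mirror the cadence visible in the log.

// waitForAPIServer retries a host-side process check every interval until it
// succeeds or the deadline passes. Illustrative sketch only (assumptions noted
// above); minikube runs the equivalent check over SSH inside the node:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// err == nil means pgrep found a matching process.
		if err := exec.Command("pgrep", "-f", "kube-apiserver").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(500*time.Millisecond, 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
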
	I0330 09:38:39.073622   44633 cli_runner.go:164] Run: docker start embed-certs-995000
	I0330 09:38:39.421542   44633 cli_runner.go:164] Run: docker container inspect embed-certs-995000 --format={{.State.Status}}
	I0330 09:38:39.486180   44633 kic.go:426] container "embed-certs-995000" state is running.
	I0330 09:38:39.486780   44633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-995000
	I0330 09:38:39.557650   44633 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/config.json ...
	I0330 09:38:39.558069   44633 machine.go:88] provisioning docker machine ...
	I0330 09:38:39.558092   44633 ubuntu.go:169] provisioning hostname "embed-certs-995000"
	I0330 09:38:39.558167   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:39.634394   44633 main.go:141] libmachine: Using SSH client type: native
	I0330 09:38:39.634853   44633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59255 <nil> <nil>}
	I0330 09:38:39.634868   44633 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-995000 && echo "embed-certs-995000" | sudo tee /etc/hostname
	I0330 09:38:39.776110   44633 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-995000
	
	I0330 09:38:39.776224   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:39.845841   44633 main.go:141] libmachine: Using SSH client type: native
	I0330 09:38:39.846193   44633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59255 <nil> <nil>}
	I0330 09:38:39.846209   44633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-995000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-995000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-995000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:38:39.967107   44633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:38:39.967130   44633 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:38:39.967153   44633 ubuntu.go:177] setting up certificates
	I0330 09:38:39.967160   44633 provision.go:83] configureAuth start
	I0330 09:38:39.967238   44633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-995000
	I0330 09:38:40.031597   44633 provision.go:138] copyHostCerts
	I0330 09:38:40.031682   44633 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:38:40.031693   44633 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:38:40.031800   44633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:38:40.032017   44633 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:38:40.032024   44633 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:38:40.032085   44633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:38:40.032235   44633 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:38:40.032241   44633 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:38:40.032302   44633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:38:40.032427   44633 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.embed-certs-995000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-995000]
	I0330 09:38:40.303494   44633 provision.go:172] copyRemoteCerts
	I0330 09:38:40.303565   44633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:38:40.303615   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:40.365567   44633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59255 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/embed-certs-995000/id_rsa Username:docker}
	I0330 09:38:40.453746   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0330 09:38:40.471057   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0330 09:38:40.488429   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:38:40.506426   44633 provision.go:86] duration metric: configureAuth took 539.252084ms
	I0330 09:38:40.506440   44633 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:38:40.506606   44633 config.go:182] Loaded profile config "embed-certs-995000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:38:40.506667   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:40.567932   44633 main.go:141] libmachine: Using SSH client type: native
	I0330 09:38:40.568269   44633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59255 <nil> <nil>}
	I0330 09:38:40.568280   44633 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:38:40.690825   44633 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:38:40.690838   44633 ubuntu.go:71] root file system type: overlay
	I0330 09:38:40.690917   44633 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:38:40.690997   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:40.752508   44633 main.go:141] libmachine: Using SSH client type: native
	I0330 09:38:40.752853   44633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59255 <nil> <nil>}
	I0330 09:38:40.752903   44633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:38:40.880405   44633 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:38:40.880500   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:40.941347   44633 main.go:141] libmachine: Using SSH client type: native
	I0330 09:38:40.941684   44633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59255 <nil> <nil>}
	I0330 09:38:40.941699   44633 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:38:41.064920   44633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:38:41.064938   44633 machine.go:91] provisioned docker machine in 1.506857523s
	I0330 09:38:41.064958   44633 start.go:300] post-start starting for "embed-certs-995000" (driver="docker")
	I0330 09:38:41.064963   44633 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:38:41.065028   44633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:38:41.065079   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:41.127566   44633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59255 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/embed-certs-995000/id_rsa Username:docker}
	I0330 09:38:41.215716   44633 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:38:41.219477   44633 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:38:41.219494   44633 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:38:41.219501   44633 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:38:41.219506   44633 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:38:41.219514   44633 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:38:41.219600   44633 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:38:41.219774   44633 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:38:41.219935   44633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:38:41.227539   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:38:41.244993   44633 start.go:303] post-start completed in 180.026061ms
	I0330 09:38:41.245065   44633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:38:41.245129   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:41.305907   44633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59255 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/embed-certs-995000/id_rsa Username:docker}
	I0330 09:38:41.390060   44633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:38:41.394849   44633 fix.go:57] fixHost completed within 2.446722745s
	I0330 09:38:41.394861   44633 start.go:83] releasing machines lock for "embed-certs-995000", held for 2.446761107s
	I0330 09:38:41.394936   44633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-995000
	I0330 09:38:41.456173   44633 ssh_runner.go:195] Run: cat /version.json
	I0330 09:38:41.456174   44633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0330 09:38:41.456245   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:41.456280   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:41.521701   44633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59255 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/embed-certs-995000/id_rsa Username:docker}
	I0330 09:38:41.521769   44633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59255 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/embed-certs-995000/id_rsa Username:docker}
	I0330 09:38:41.604756   44633 ssh_runner.go:195] Run: systemctl --version
	I0330 09:38:41.656214   44633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0330 09:38:41.661686   44633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0330 09:38:41.677697   44633 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0330 09:38:41.677768   44633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0330 09:38:41.685919   44633 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0330 09:38:41.685934   44633 start.go:481] detecting cgroup driver to use...
	I0330 09:38:41.685945   44633 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:38:41.686015   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:38:41.699633   44633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0330 09:38:41.708682   44633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:38:41.717465   44633 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:38:41.717533   44633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:38:41.726329   44633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:38:41.735173   44633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:38:41.743857   44633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:38:41.752620   44633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:38:41.760386   44633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:38:41.769138   44633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:38:41.776645   44633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:38:41.783957   44633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:38:41.851717   44633 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 09:38:41.931056   44633 start.go:481] detecting cgroup driver to use...
	I0330 09:38:41.931074   44633 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:38:41.931144   44633 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:38:41.948307   44633 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:38:41.948371   44633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:38:41.961512   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:38:41.976853   44633 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:38:41.981165   44633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:38:41.990681   44633 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0330 09:38:42.021697   44633 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:38:42.132619   44633 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:38:42.196422   44633 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:38:42.196439   44633 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:38:42.231021   44633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:38:42.334755   44633 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:38:42.588470   44633 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:38:42.652104   44633 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0330 09:38:42.735804   44633 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:38:42.805961   44633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:38:42.874970   44633 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0330 09:38:42.887022   44633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:38:42.955620   44633 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0330 09:38:43.031538   44633 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0330 09:38:43.031657   44633 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0330 09:38:43.036375   44633 start.go:549] Will wait 60s for crictl version
	I0330 09:38:43.036440   44633 ssh_runner.go:195] Run: which crictl
	I0330 09:38:43.040402   44633 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0330 09:38:43.070836   44633 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0330 09:38:43.070918   44633 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:38:43.097605   44633 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
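
The two "docker version --format {{.Server.Version}}" calls above are how the container runtime version (23.0.1 in this run) is read back from the node. As an illustration only, a standalone Go sketch of the same query is shown below; it runs against whatever local Docker daemon is available rather than over the node's SSH session as in the log, and the function name is an assumption for the example.

// dockerServerVersion shells out to the docker CLI and returns the daemon
// version string, mirroring the command visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func dockerServerVersion() (string, error) {
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	if err != nil {
		return "", fmt.Errorf("docker version: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	v, err := dockerServerVersion()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Docker server version:", v) // 23.0.1 on the node in this run
}
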
	I0330 09:38:44.620042   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:44.753669   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:44.775118   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.775132   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:44.775202   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:44.795935   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.795948   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:44.796016   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:44.815455   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.815469   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:44.815543   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:44.835159   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.835173   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:44.835242   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:44.855484   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.855498   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:44.855577   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:44.875723   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.875735   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:44.875805   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:44.895146   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.895158   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:44.895229   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:44.938559   44075 logs.go:277] 0 containers: []
	W0330 09:38:44.938572   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:44.938579   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:44.938586   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:46.984801   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046193204s)
	I0330 09:38:46.984906   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:46.984914   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:47.021900   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:47.021914   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:47.034181   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:47.034195   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:47.089524   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:47.089537   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:47.089544   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:43.145433   44633 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 23.0.1 ...
	I0330 09:38:43.145612   44633 cli_runner.go:164] Run: docker exec -t embed-certs-995000 dig +short host.docker.internal
	I0330 09:38:43.271427   44633 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:38:43.271561   44633 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:38:43.275983   44633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:38:43.286189   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:43.349128   44633 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 09:38:43.349204   44633 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:38:43.370684   44633 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0330 09:38:43.370710   44633 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:38:43.370800   44633 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:38:43.391398   44633 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0330 09:38:43.391417   44633 cache_images.go:84] Images are preloaded, skipping loading
	I0330 09:38:43.391505   44633 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:38:43.417290   44633 cni.go:84] Creating CNI manager for ""
	I0330 09:38:43.417309   44633 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:38:43.417325   44633 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 09:38:43.417340   44633 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-995000 NodeName:embed-certs-995000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:38:43.417454   44633 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-995000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0330 09:38:43.417529   44633 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-995000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:embed-certs-995000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0330 09:38:43.417608   44633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0330 09:38:43.425718   44633 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:38:43.425788   44633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:38:43.433479   44633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0330 09:38:43.447053   44633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0330 09:38:43.460512   44633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0330 09:38:43.473731   44633 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:38:43.477542   44633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:38:43.487379   44633 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000 for IP: 192.168.67.2
	I0330 09:38:43.487400   44633 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:38:43.487566   44633 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:38:43.487622   44633 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:38:43.487728   44633 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/client.key
	I0330 09:38:43.487813   44633 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/apiserver.key.c7fa3a9e
	I0330 09:38:43.487862   44633 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/proxy-client.key
	I0330 09:38:43.488064   44633 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:38:43.488101   44633 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:38:43.488112   44633 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:38:43.488148   44633 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:38:43.488187   44633 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:38:43.488217   44633 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:38:43.488285   44633 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:38:43.490020   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:38:43.507885   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0330 09:38:43.525323   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:38:43.543030   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/embed-certs-995000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0330 09:38:43.560769   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:38:43.578464   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:38:43.595983   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:38:43.613416   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:38:43.631092   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:38:43.648811   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:38:43.666455   44633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:38:43.684043   44633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:38:43.697305   44633 ssh_runner.go:195] Run: openssl version
	I0330 09:38:43.702839   44633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:38:43.711333   44633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:38:43.715434   44633 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:38:43.715479   44633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:38:43.721128   44633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:38:43.728882   44633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:38:43.737152   44633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:38:43.741291   44633 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:38:43.741335   44633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:38:43.746842   44633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
	I0330 09:38:43.754716   44633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:38:43.762984   44633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:38:43.766903   44633 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:38:43.766944   44633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:38:43.772378   44633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
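
The certificate steps above reuse previously generated certs (the server cert was signed earlier with san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-995000]), copy them into /var/lib/minikube/certs, and link the CA hashes under /etc/ssl/certs. As an illustration only, the Go sketch below prints the subject alternative names of one of those PEM certificates; it is not part of minikube, and the hard-coded path simply reuses the apiserver.crt destination that appears in the scp step above.

// Reads a PEM certificate and prints the DNS and IP SANs baked into it.
// Illustrative sketch only; path taken from the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}
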
	I0330 09:38:43.779911   44633 kubeadm.go:401] StartCluster: {Name:embed-certs-995000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:embed-certs-995000 Namespace:default APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:38:43.780029   44633 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:38:43.799863   44633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:38:43.807811   44633 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0330 09:38:43.807827   44633 kubeadm.go:633] restartCluster start
	I0330 09:38:43.807884   44633 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0330 09:38:43.815073   44633 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:43.815190   44633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-995000
	I0330 09:38:43.877730   44633 kubeconfig.go:135] verify returned: extract IP: "embed-certs-995000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:38:43.877896   44633 kubeconfig.go:146] "embed-certs-995000" context is missing from /Users/jenkins/minikube-integration/16199-24978/kubeconfig - will repair!
	I0330 09:38:43.878235   44633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:38:43.879637   44633 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0330 09:38:43.887736   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:43.887806   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:43.896482   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:44.396709   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:44.396908   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:44.408094   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:44.896659   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:44.896722   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:44.905557   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:45.398608   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:45.398790   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:45.410382   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:45.898655   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:45.898824   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:45.910181   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:46.396624   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:46.396696   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:46.406729   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:46.897637   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:46.897774   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:46.909010   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:47.397308   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:47.397495   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:47.407747   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:47.896818   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:47.896889   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:47.906542   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:49.613617   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:49.753668   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:49.775059   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.775072   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:49.775140   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:49.793872   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.793885   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:49.793953   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:49.812776   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.812789   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:49.812856   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:49.832541   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.832552   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:49.832630   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:49.852407   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.852420   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:49.852489   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:49.872725   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.872737   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:49.872804   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:49.892499   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.892511   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:49.892578   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:49.911649   44075 logs.go:277] 0 containers: []
	W0330 09:38:49.911663   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:49.911670   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:49.911677   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:49.923970   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:49.923982   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:49.978920   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:49.978932   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:49.978940   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:49.999797   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:49.999811   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:52.044591   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044765072s)
	I0330 09:38:52.044706   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:52.044720   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:48.398631   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:48.398796   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:48.410151   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:48.898422   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:48.898572   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:48.909865   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:49.396887   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:49.396973   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:49.406567   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:49.896615   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:49.896683   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:49.905798   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:50.397156   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:50.397345   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:50.408525   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:50.897981   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:50.898065   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:50.908186   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:51.398630   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:51.398808   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:51.410071   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:51.897127   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:51.897287   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:51.908648   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:52.396555   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:52.396638   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:52.406641   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:52.898618   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:52.898802   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:52.910085   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
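
The block of `Checking apiserver status ...` / `stopped: unable to get apiserver pid` entries above is a fixed-interval probe: roughly every 500ms (compare the timestamps) the runner executes `sudo pgrep -xnf kube-apiserver.*minikube.*` and treats a non-zero exit as "apiserver not up yet". A minimal sketch of that kind of poll, assuming a local shell and a hard-coded interval instead of minikube's ssh_runner and its retry bookkeeping:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep until a kube-apiserver process whose
    // command line mentions "minikube" appears, or the timeout elapses.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil // pid found
            }
            // pgrep exits 1 when nothing matches; pause briefly and retry.
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("timed out waiting for kube-apiserver process")
    }

    func main() {
        pid, err := waitForAPIServerPID(2 * time.Minute)
        fmt.Println(pid, err)
    }
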
	I0330 09:38:54.582925   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:54.752279   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:54.774410   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.774426   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:54.774496   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:54.795245   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.795259   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:54.795340   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:54.814973   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.814988   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:54.815058   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:54.837578   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.837596   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:54.837680   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:54.861763   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.861780   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:54.861897   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:54.887553   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.887566   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:54.887647   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:54.906493   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.906505   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:54.906572   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:54.927089   44075 logs.go:277] 0 containers: []
	W0330 09:38:54.927108   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:54.927117   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:38:54.927127   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:38:54.974700   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:38:54.974721   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:38:54.987588   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:38:54.987611   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:38:55.046244   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:55.046257   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:55.046267   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:55.069304   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:55.069323   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:38:57.116164   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046827047s)
	I0330 09:38:53.397705   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:53.397864   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:53.409487   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:53.898664   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:53.898845   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:53.910155   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:53.910166   44633 api_server.go:165] Checking apiserver status ...
	I0330 09:38:53.910225   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:38:53.919120   44633 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:53.919138   44633 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0330 09:38:53.919157   44633 kubeadm.go:1120] stopping kube-system containers ...
	I0330 09:38:53.919228   44633 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:38:53.941170   44633 docker.go:465] Stopping containers: [dffd673c3f1f e63342172f1a 2d0e3a793c5b acd4ca38dbbb 93934aff4866 7e4da48c9886 9b78aa91aa83 78debcab8ca6 eb07751e8851 421d25b62616 eb8914cd2632 e2f48cc44070 f20fb5cc18ae 61aeb42d35cf 8be1602364df c2bf1aff3423]
	I0330 09:38:53.941256   44633 ssh_runner.go:195] Run: docker stop dffd673c3f1f e63342172f1a 2d0e3a793c5b acd4ca38dbbb 93934aff4866 7e4da48c9886 9b78aa91aa83 78debcab8ca6 eb07751e8851 421d25b62616 eb8914cd2632 e2f48cc44070 f20fb5cc18ae 61aeb42d35cf 8be1602364df c2bf1aff3423
	I0330 09:38:53.961874   44633 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0330 09:38:53.972627   44633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:38:53.980497   44633 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 30 16:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 30 16:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Mar 30 16:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 30 16:37 /etc/kubernetes/scheduler.conf
	
	I0330 09:38:53.980555   44633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0330 09:38:53.988375   44633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0330 09:38:53.996185   44633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0330 09:38:54.003540   44633 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:54.003589   44633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0330 09:38:54.010889   44633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0330 09:38:54.018333   44633 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:38:54.018385   44633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0330 09:38:54.025956   44633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:38:54.033796   44633 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0330 09:38:54.033808   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:38:54.087745   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:38:54.621275   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:38:54.760725   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:38:54.826010   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
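
Before re-running the kubeadm phases above, the runner greps each existing /etc/kubernetes/*.conf for the expected endpoint https://control-plane.minikube.internal:8443 and deletes any file that does not mention it ("may not be in ... - will remove"), so kubeadm regenerates it. A minimal Go sketch of that check-then-remove step, with paths and endpoint taken from the log and error handling simplified (illustrative, not the kubeadm.go implementation):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfig removes a kubeconfig-style file when it does not
    // reference the expected control-plane endpoint.
    func pruneStaleKubeconfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // endpoint present, keep the file
        }
        fmt.Printf("%q not found in %s - removing\n", endpoint, path)
        return os.Remove(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            if err := pruneStaleKubeconfig(f, endpoint); err != nil {
                fmt.Println(err)
            }
        }
    }
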
	I0330 09:38:54.933424   44633 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:38:54.933501   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:55.450068   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:55.947980   44633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:55.959477   44633 api_server.go:71] duration metric: took 1.026053222s to wait for apiserver process to appear ...
	I0330 09:38:55.959494   44633 api_server.go:87] waiting for apiserver healthz status ...
	I0330 09:38:55.959507   44633 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59259/healthz ...
	I0330 09:38:55.960694   44633 api_server.go:268] stopped: https://127.0.0.1:59259/healthz: Get "https://127.0.0.1:59259/healthz": EOF
	I0330 09:38:56.460967   44633 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59259/healthz ...
	I0330 09:38:58.273452   44633 api_server.go:278] https://127.0.0.1:59259/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0330 09:38:58.273475   44633 api_server.go:102] status: https://127.0.0.1:59259/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0330 09:38:58.460760   44633 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59259/healthz ...
	I0330 09:38:58.466467   44633 api_server.go:278] https://127.0.0.1:59259/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0330 09:38:58.466485   44633 api_server.go:102] status: https://127.0.0.1:59259/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0330 09:38:58.962186   44633 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59259/healthz ...
	I0330 09:38:58.969425   44633 api_server.go:278] https://127.0.0.1:59259/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0330 09:38:58.969445   44633 api_server.go:102] status: https://127.0.0.1:59259/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0330 09:38:59.460774   44633 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59259/healthz ...
	I0330 09:38:59.465904   44633 api_server.go:278] https://127.0.0.1:59259/healthz returned 200:
	ok
	I0330 09:38:59.472446   44633 api_server.go:140] control plane version: v1.26.3
	I0330 09:38:59.472456   44633 api_server.go:130] duration metric: took 3.512953293s to wait for apiserver health ...
	I0330 09:38:59.472463   44633 cni.go:84] Creating CNI manager for ""
	I0330 09:38:59.472472   44633 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:38:59.494146   44633 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
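
The healthz sequence above (EOF while the server is still binding, a 403 for the anonymous user, a 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200 "ok") comes from repeatedly GETting https://127.0.0.1:<forwarded-port>/healthz until it answers. A minimal probe-loop sketch; it skips TLS verification and sends no client credentials, so it is only an illustration of the polling pattern, not minikube's authenticated api_server.go check:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz GETs the apiserver /healthz endpoint until it returns 200
    // or the timeout expires. TLS verification is skipped because the endpoint
    // presents the cluster's self-signed certificate.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        fmt.Println(pollHealthz("https://127.0.0.1:59259/healthz", 4*time.Minute))
    }
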
	I0330 09:38:59.616372   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:38:59.751631   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:38:59.772968   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.772980   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:38:59.773046   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:38:59.792454   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.792467   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:38:59.792538   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:38:59.812960   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.812973   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:38:59.813043   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:38:59.833683   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.833698   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:38:59.833779   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:38:59.854249   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.854262   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:38:59.854338   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:38:59.876060   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.876074   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:38:59.876159   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:38:59.928403   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.928417   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:38:59.928496   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:38:59.948661   44075 logs.go:277] 0 containers: []
	W0330 09:38:59.948674   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:38:59.948681   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:38:59.948688   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:38:59.970582   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:38:59.970600   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:02.016327   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045711683s)
	I0330 09:39:02.016435   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:02.016442   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:02.053922   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:02.053937   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:02.066877   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:02.066893   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:02.124331   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:38:59.515998   44633 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0330 09:38:59.525933   44633 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0330 09:38:59.540068   44633 system_pods.go:43] waiting for kube-system pods to appear ...
	I0330 09:38:59.547702   44633 system_pods.go:59] 8 kube-system pods found
	I0330 09:38:59.547717   44633 system_pods.go:61] "coredns-787d4945fb-5qwts" [a5fcdc7d-bf57-437b-bafa-77caff1476f1] Running
	I0330 09:38:59.547722   44633 system_pods.go:61] "etcd-embed-certs-995000" [7b03ba3c-f49d-4dad-9c74-a9f59cb715fb] Running
	I0330 09:38:59.547725   44633 system_pods.go:61] "kube-apiserver-embed-certs-995000" [c3722a67-2ad4-4724-bde4-9adbb1992fb3] Running
	I0330 09:38:59.547729   44633 system_pods.go:61] "kube-controller-manager-embed-certs-995000" [11230dc8-4459-4e3a-8c91-7d057052c3e8] Running
	I0330 09:38:59.547732   44633 system_pods.go:61] "kube-proxy-9v4xh" [8c2e24bc-1004-4a17-b7b9-f7947e7706a1] Running
	I0330 09:38:59.547736   44633 system_pods.go:61] "kube-scheduler-embed-certs-995000" [1586870c-9760-49b7-b950-d70724fd39f8] Running
	I0330 09:38:59.547742   44633 system_pods.go:61] "metrics-server-7997d45854-7527p" [f2bc4e37-4a35-4254-9ad5-d796c98ff78b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0330 09:38:59.547749   44633 system_pods.go:61] "storage-provisioner" [6e09d537-9eff-4586-a45a-1fbca8805340] Running
	I0330 09:38:59.547754   44633 system_pods.go:74] duration metric: took 7.675859ms to wait for pod list to return data ...
	I0330 09:38:59.547759   44633 node_conditions.go:102] verifying NodePressure condition ...
	I0330 09:38:59.550751   44633 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0330 09:38:59.550764   44633 node_conditions.go:123] node cpu capacity is 6
	I0330 09:38:59.550774   44633 node_conditions.go:105] duration metric: took 3.011277ms to run NodePressure ...
	I0330 09:38:59.550785   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:38:59.692297   44633 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0330 09:38:59.696556   44633 retry.go:31] will retry after 198.845884ms: kubelet not initialised
	I0330 09:38:59.900780   44633 retry.go:31] will retry after 228.098244ms: kubelet not initialised
	I0330 09:39:00.133931   44633 retry.go:31] will retry after 546.839542ms: kubelet not initialised
	I0330 09:39:00.691692   44633 retry.go:31] will retry after 905.461795ms: kubelet not initialised
	I0330 09:39:01.605106   44633 kubeadm.go:784] kubelet initialised
	I0330 09:39:01.605119   44633 kubeadm.go:785] duration metric: took 1.912805282s waiting for restarted kubelet to initialise ...
	I0330 09:39:01.605126   44633 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0330 09:39:01.609470   44633 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-5qwts" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:01.614490   44633 pod_ready.go:92] pod "coredns-787d4945fb-5qwts" in "kube-system" namespace has status "Ready":"True"
	I0330 09:39:01.614498   44633 pod_ready.go:81] duration metric: took 5.018096ms waiting for pod "coredns-787d4945fb-5qwts" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:01.614503   44633 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
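
The pod_ready.go entries above wait, pod by pod, for the PodReady condition to turn True, retrying until the 4m0s deadline. A minimal client-go sketch of that per-pod wait (illustrative only; minikube's pod_ready.go also handles pod deletion, restarts, and status logging):

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a kube-system pod until its Ready condition is True,
    // or the timeout (4m0s in the log above) expires.
    func waitPodReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient errors: keep retrying
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "etcd-embed-certs-995000", 4*time.Minute))
    }
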
	I0330 09:39:04.624592   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:04.752131   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:04.774972   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.774985   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:04.775054   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:04.795624   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.795641   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:04.795741   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:04.815258   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.815271   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:04.815343   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:04.835053   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.835067   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:04.835134   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:04.854082   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.854094   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:04.854159   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:04.874244   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.874257   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:04.874324   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:04.894448   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.894460   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:04.894528   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:04.914607   44075 logs.go:277] 0 containers: []
	W0330 09:39:04.914621   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:04.914629   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:04.914637   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:04.952789   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:04.952807   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:04.965372   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:04.965385   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:05.033516   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:05.033537   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:05.033547   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:05.057949   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:05.057964   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:07.103333   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045354029s)
	I0330 09:39:03.625650   44633 pod_ready.go:102] pod "etcd-embed-certs-995000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:05.627132   44633 pod_ready.go:102] pod "etcd-embed-certs-995000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:09.603795   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:09.753164   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:09.774567   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.774580   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:09.774650   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:09.794227   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.794239   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:09.794309   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:09.813473   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.813486   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:09.813555   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:09.833447   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.833460   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:09.833529   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:09.853061   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.853074   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:09.853144   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:09.872976   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.872989   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:09.873059   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:09.892552   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.892565   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:09.892647   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:09.912233   44075 logs.go:277] 0 containers: []
	W0330 09:39:09.912246   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:09.912253   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:09.912261   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:09.950094   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:09.950113   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:09.963501   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:09.963517   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:10.019047   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:10.019059   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:10.019066   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:10.040446   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:10.040462   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:12.089666   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049187269s)
	I0330 09:39:08.126088   44633 pod_ready.go:102] pod "etcd-embed-certs-995000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:10.626133   44633 pod_ready.go:102] pod "etcd-embed-certs-995000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:14.590198   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:14.751942   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:14.774009   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.774022   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:14.774090   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:14.794543   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.794555   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:14.794625   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:14.813863   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.813878   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:14.813949   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:14.835726   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.835740   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:14.835809   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:14.855619   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.855636   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:14.855717   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:14.876498   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.876511   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:14.876580   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:14.923277   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.923292   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:14.923362   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:14.943716   44075 logs.go:277] 0 containers: []
	W0330 09:39:14.943729   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:14.943736   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:14.943744   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:14.965262   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:14.965276   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:17.010472   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045180704s)
	I0330 09:39:17.010577   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:17.010584   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:17.048611   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:17.048628   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:17.061779   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:17.061793   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:17.117469   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:13.127081   44633 pod_ready.go:102] pod "etcd-embed-certs-995000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:14.125337   44633 pod_ready.go:92] pod "etcd-embed-certs-995000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:39:14.125351   44633 pod_ready.go:81] duration metric: took 12.510825445s waiting for pod "etcd-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.125358   44633 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.130808   44633 pod_ready.go:92] pod "kube-apiserver-embed-certs-995000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:39:14.130818   44633 pod_ready.go:81] duration metric: took 5.454542ms waiting for pod "kube-apiserver-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.130825   44633 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.136094   44633 pod_ready.go:92] pod "kube-controller-manager-embed-certs-995000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:39:14.136105   44633 pod_ready.go:81] duration metric: took 5.274976ms waiting for pod "kube-controller-manager-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.136111   44633 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9v4xh" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.141363   44633 pod_ready.go:92] pod "kube-proxy-9v4xh" in "kube-system" namespace has status "Ready":"True"
	I0330 09:39:14.141372   44633 pod_ready.go:81] duration metric: took 5.256463ms waiting for pod "kube-proxy-9v4xh" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.141378   44633 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.146719   44633 pod_ready.go:92] pod "kube-scheduler-embed-certs-995000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:39:14.146727   44633 pod_ready.go:81] duration metric: took 5.345104ms waiting for pod "kube-scheduler-embed-certs-995000" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:14.146733   44633 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-7527p" in "kube-system" namespace to be "Ready" ...
	I0330 09:39:16.530710   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:19.617704   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:19.752449   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:19.774802   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.774815   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:19.774883   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:19.793918   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.793931   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:19.793999   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:19.813324   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.813337   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:19.813404   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:19.832073   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.832086   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:19.832154   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:19.851698   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.851711   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:19.851778   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:19.871267   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.871280   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:19.871348   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:19.891063   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.891075   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:19.891144   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:19.911720   44075 logs.go:277] 0 containers: []
	W0330 09:39:19.911733   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:19.911740   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:19.911748   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:19.933072   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:19.933087   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:21.982053   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048950556s)
	I0330 09:39:21.982157   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:21.982165   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:22.019375   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:22.019388   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:22.031962   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:22.031977   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:22.087413   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:18.533416   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:21.032888   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:23.032976   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:24.587734   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:24.752081   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:24.774883   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.774896   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:24.774965   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:24.794298   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.794310   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:24.794381   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:24.813586   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.813600   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:24.813673   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:24.833270   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.833283   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:24.833351   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:24.853481   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.853493   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:24.853562   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:24.874267   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.874280   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:24.874346   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:24.894082   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.894095   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:24.894164   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:24.913854   44075 logs.go:277] 0 containers: []
	W0330 09:39:24.913866   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:24.913873   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:24.913880   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:24.953053   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:24.953067   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:24.965556   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:24.965570   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:25.021464   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:25.021475   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:25.021482   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:25.044710   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:25.044726   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:27.089609   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044867071s)
	I0330 09:39:25.533103   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:27.533583   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:29.590032   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:29.752870   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:29.776152   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.776165   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:29.776236   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:29.795591   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.795604   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:29.795671   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:29.814647   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.814661   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:29.814738   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:29.835033   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.835046   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:29.835117   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:29.856174   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.856187   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:29.856257   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:29.876707   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.876722   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:29.876800   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:29.927493   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.927510   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:29.927588   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:29.947729   44075 logs.go:277] 0 containers: []
	W0330 09:39:29.947743   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:29.947750   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:29.947757   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:29.986316   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:29.986331   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:29.998985   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:29.998999   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:30.056880   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:30.056892   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:30.056899   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:30.078514   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:30.078528   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:32.121440   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.042897105s)
	I0330 09:39:30.031098   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:32.033675   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:34.623358   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:34.753781   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:39:34.776068   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.776082   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:39:34.776149   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:39:34.795314   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.795325   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:39:34.795393   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:39:34.814594   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.814608   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:39:34.814676   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:39:34.834644   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.834657   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:39:34.834730   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:39:34.853809   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.853822   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:39:34.853892   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:39:34.876480   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.876493   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:39:34.876562   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:39:34.896280   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.896293   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:39:34.896358   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:39:34.915957   44075 logs.go:277] 0 containers: []
	W0330 09:39:34.915970   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:39:34.915977   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:39:34.915985   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:39:34.971036   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:39:34.971048   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:39:34.971056   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0330 09:39:34.991768   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:39:34.991784   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:39:37.038619   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046818278s)
	I0330 09:39:37.038748   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:39:37.038757   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:39:37.075929   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:39:37.075948   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:39:34.534237   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:37.031749   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:39.589055   44075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:39:39.598858   44075 kubeadm.go:637] restartCluster took 4m11.704240904s
	W0330 09:39:39.598927   44075 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0330 09:39:39.598942   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0330 09:39:40.013458   44075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:39:40.023505   44075 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:39:40.031781   44075 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:39:40.031840   44075 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:39:40.042087   44075 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:39:40.042127   44075 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:39:40.163033   44075 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:39:40.163115   44075 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:39:40.193522   44075 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:39:40.271323   44075 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
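The `kubeadm init` invocation above passes a long --ignore-preflight-errors list, so the four [WARNING ...] lines are reported but not enforced; the eventual failure comes later, in the wait-control-plane phase, not from preflight. The warned-about conditions can be inspected directly before an init attempt; a short illustrative sketch (not the exact minikube invocation):

docker info --format '{{.CgroupDriver}}'        # cgroupfs vs systemd (IsDockerSystemdCheck)
swapon --show                                   # whether swap is enabled (Swap)
docker version --format '{{.Server.Version}}'   # validated-version check (SystemVerification)
systemctl is-enabled kubelet                    # Service-Kubelet warning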
	I0330 09:39:39.530170   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:41.532622   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:44.030451   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:46.033006   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:48.033993   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:50.532221   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:52.534407   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:54.535261   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:57.032309   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:39:59.531965   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:01.534068   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:04.031125   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:06.032840   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:08.033846   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:10.532915   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:13.033285   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:15.533207   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:18.032446   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:20.534638   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:23.034535   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:25.534095   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:28.032848   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:30.533936   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:32.534453   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:35.033532   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:37.532702   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:39.533721   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:42.034528   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:44.533743   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:47.033077   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:49.034973   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:51.533681   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:54.030498   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:56.032411   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:40:58.531669   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:00.532754   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:03.032412   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:05.534451   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:08.033092   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:10.532411   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:13.031481   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:15.033268   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:17.034633   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:19.534357   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:22.032568   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:24.531377   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:27.033059   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:29.533118   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:32.034435   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:35.961076   44075 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:41:35.961172   44075 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0330 09:41:35.964956   44075 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:41:35.964995   44075 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:41:35.965046   44075 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:41:35.965114   44075 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:41:35.965200   44075 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:41:35.965290   44075 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:41:35.965361   44075 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:41:35.965399   44075 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:41:35.965454   44075 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:41:35.987968   44075 out.go:204]   - Generating certificates and keys ...
	I0330 09:41:35.988080   44075 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:41:35.988192   44075 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:41:35.988321   44075 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:41:35.988409   44075 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:41:35.988522   44075 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:41:35.988610   44075 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:41:35.988713   44075 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:41:35.988813   44075 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:41:35.988927   44075 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:41:35.989048   44075 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:41:35.989105   44075 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:41:35.989190   44075 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:41:35.989276   44075 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:41:35.989361   44075 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:41:35.989462   44075 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:41:35.989557   44075 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:41:35.989654   44075 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:41:36.029684   44075 out.go:204]   - Booting up control plane ...
	I0330 09:41:36.029817   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:41:36.029931   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:41:36.030027   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:41:36.030120   44075 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:41:36.030287   44075 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:41:36.030334   44075 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:41:36.030393   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.030660   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.030783   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031001   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031088   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031295   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031364   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031534   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031598   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:41:36.031711   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:41:36.031715   44075 kubeadm.go:322] 
	I0330 09:41:36.031743   44075 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:41:36.031808   44075 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:41:36.031819   44075 kubeadm.go:322] 
	I0330 09:41:36.031854   44075 kubeadm.go:322] This error is likely caused by:
	I0330 09:41:36.031894   44075 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:41:36.032003   44075 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:41:36.032012   44075 kubeadm.go:322] 
	I0330 09:41:36.032088   44075 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:41:36.032114   44075 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:41:36.032143   44075 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:41:36.032149   44075 kubeadm.go:322] 
	I0330 09:41:36.032256   44075 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:41:36.032355   44075 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:41:36.032431   44075 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:41:36.032472   44075 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:41:36.032550   44075 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:41:36.032587   44075 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	W0330 09:41:36.032711   44075 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
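The failure above reduces to a single symptom: the kubelet's local health endpoint on port 10248 never answers, so the static pods (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) are never started and kubeadm times out waiting for the control plane. The diagnostics kubeadm suggests can be run directly on the node; a short sketch using the same commands quoted in the error text:

curl -sSL http://localhost:10248/healthz        # the kubelet health check kubeadm polls
systemctl status kubelet                        # is the service running at all?
sudo journalctl -xeu kubelet | tail -n 50       # why it exited, if it did
docker ps -a | grep kube | grep -v pause        # any control-plane container that crashed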
	
	I0330 09:41:36.032739   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0330 09:41:36.443725   44075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:41:36.454031   44075 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:41:36.454085   44075 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:41:36.461956   44075 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:41:36.461977   44075 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:41:36.510689   44075 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0330 09:41:36.510739   44075 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:41:36.682210   44075 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:41:36.682309   44075 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:41:36.682383   44075 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:41:36.840717   44075 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:41:36.841441   44075 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:41:36.848152   44075 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0330 09:41:36.918163   44075 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:41:36.939692   44075 out.go:204]   - Generating certificates and keys ...
	I0330 09:41:36.939778   44075 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:41:36.939831   44075 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:41:36.939902   44075 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:41:36.939966   44075 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:41:36.940046   44075 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:41:36.940107   44075 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:41:36.940166   44075 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:41:36.940233   44075 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:41:36.940328   44075 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:41:36.940397   44075 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:41:36.940428   44075 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:41:36.940468   44075 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:41:37.085927   44075 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:41:37.288789   44075 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:41:37.365910   44075 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:41:37.516951   44075 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:41:37.517546   44075 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:41:34.533603   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:37.031161   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:37.538396   44075 out.go:204]   - Booting up control plane ...
	I0330 09:41:37.538513   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:41:37.538598   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:41:37.538671   44075 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:41:37.538758   44075 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:41:37.538956   44075 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:41:39.032129   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:41.532094   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:43.532548   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:46.032808   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:48.532191   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:50.532325   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:53.031947   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:55.032417   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:57.532461   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:41:59.532800   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:02.032351   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:04.532407   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:06.532836   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:08.533019   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:11.032390   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:13.032912   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:15.532851   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:18.032292   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:17.525758   44075 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0330 09:42:17.526476   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:17.526693   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:42:20.032436   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:22.032904   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:22.528035   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:22.528259   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:42:24.531033   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:26.533399   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:29.032021   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:31.032829   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:33.032882   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:32.528965   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:32.529211   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:42:35.532433   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:37.532513   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:40.032622   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:42.530881   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:44.532528   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:47.033373   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:49.532029   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:51.532863   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:52.530066   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:42:52.530221   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:42:54.031189   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:56.032622   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:42:58.532607   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:43:00.533054   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:43:03.031456   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:43:05.032169   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:43:07.032959   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:43:09.532474   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:43:12.031720   44633 pod_ready.go:102] pod "metrics-server-7997d45854-7527p" in "kube-system" namespace has status "Ready":"False"
	I0330 09:43:14.525193   44633 pod_ready.go:81] duration metric: took 4m0.378110512s waiting for pod "metrics-server-7997d45854-7527p" in "kube-system" namespace to be "Ready" ...
	E0330 09:43:14.525221   44633 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7997d45854-7527p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0330 09:43:14.525240   44633 pod_ready.go:38] duration metric: took 4m12.919750996s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0330 09:43:14.525312   44633 kubeadm.go:637] restartCluster took 4m30.717050158s
	W0330 09:43:14.525427   44633 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0330 09:43:14.525468   44633 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0330 09:43:18.821308   44633 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.295818112s)
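Note that the two interleaved runs reset through different CRI sockets: the v1.16.0 run (pid 44075) uses the legacy /var/run/dockershim.sock, while the v1.26.3 run (pid 44633) goes through /var/run/cri-dockerd.sock, since dockershim was removed from newer kubelets and cri-dockerd is the replacement adapter. A hedged sketch for checking which socket is present and responding on the node (socket paths taken from this log; crictl's --runtime-endpoint flag is standard):

ls -l /var/run/cri-dockerd.sock /var/run/dockershim.sock 2>/dev/null
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | head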
	I0330 09:43:18.821379   44633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:43:18.831369   44633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:43:18.839236   44633 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:43:18.839293   44633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:43:18.847177   44633 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:43:18.847205   44633 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:43:18.893967   44633 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
	I0330 09:43:18.894011   44633 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:43:19.004270   44633 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:43:19.004363   44633 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:43:19.004435   44633 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:43:19.135169   44633 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:43:19.157540   44633 out.go:204]   - Generating certificates and keys ...
	I0330 09:43:19.157600   44633 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:43:19.157678   44633 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:43:19.157760   44633 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:43:19.157894   44633 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:43:19.158028   44633 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:43:19.158131   44633 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:43:19.158290   44633 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:43:19.158366   44633 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:43:19.158434   44633 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:43:19.158502   44633 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:43:19.158540   44633 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:43:19.158630   44633 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:43:19.205699   44633 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:43:19.324763   44633 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:43:19.391592   44633 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:43:19.494158   44633 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:43:19.504978   44633 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:43:19.505605   44633 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:43:19.505638   44633 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0330 09:43:19.574246   44633 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:43:19.595658   44633 out.go:204]   - Booting up control plane ...
	I0330 09:43:19.595752   44633 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:43:19.595846   44633 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:43:19.595914   44633 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:43:19.595979   44633 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:43:19.596161   44633 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:43:25.085660   44633 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.506886 seconds
	I0330 09:43:25.085834   44633 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0330 09:43:25.095368   44633 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0330 09:43:25.612810   44633 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0330 09:43:25.613006   44633 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-995000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0330 09:43:26.120075   44633 kubeadm.go:322] [bootstrap-token] Using token: vjrm24.0tev3c6hu50perr5
	I0330 09:43:26.158174   44633 out.go:204]   - Configuring RBAC rules ...
	I0330 09:43:26.158280   44633 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0330 09:43:26.160552   44633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0330 09:43:26.183497   44633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0330 09:43:26.185666   44633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0330 09:43:26.187725   44633 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0330 09:43:26.190394   44633 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0330 09:43:26.198378   44633 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0330 09:43:26.350172   44633 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0330 09:43:26.564310   44633 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0330 09:43:26.564726   44633 kubeadm.go:322] 
	I0330 09:43:26.564791   44633 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0330 09:43:26.564803   44633 kubeadm.go:322] 
	I0330 09:43:26.564872   44633 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0330 09:43:26.564880   44633 kubeadm.go:322] 
	I0330 09:43:26.564897   44633 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0330 09:43:26.564955   44633 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0330 09:43:26.565046   44633 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0330 09:43:26.565055   44633 kubeadm.go:322] 
	I0330 09:43:26.565102   44633 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0330 09:43:26.565107   44633 kubeadm.go:322] 
	I0330 09:43:26.565146   44633 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0330 09:43:26.565151   44633 kubeadm.go:322] 
	I0330 09:43:26.565192   44633 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0330 09:43:26.565256   44633 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0330 09:43:26.565307   44633 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0330 09:43:26.565315   44633 kubeadm.go:322] 
	I0330 09:43:26.565393   44633 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0330 09:43:26.565472   44633 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0330 09:43:26.565479   44633 kubeadm.go:322] 
	I0330 09:43:26.565577   44633 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vjrm24.0tev3c6hu50perr5 \
	I0330 09:43:26.565663   44633 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1e834fc5c7ed9a2912dba9ac16dbe1efd1198393393505f3962bef154a0134ee \
	I0330 09:43:26.565695   44633 kubeadm.go:322] 	--control-plane 
	I0330 09:43:26.565702   44633 kubeadm.go:322] 
	I0330 09:43:26.568552   44633 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0330 09:43:26.568573   44633 kubeadm.go:322] 
	I0330 09:43:26.568681   44633 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vjrm24.0tev3c6hu50perr5 \
	I0330 09:43:26.568787   44633 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1e834fc5c7ed9a2912dba9ac16dbe1efd1198393393505f3962bef154a0134ee 
	I0330 09:43:26.569563   44633 kubeadm.go:322] W0330 16:43:18.888892    9173 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0330 09:43:26.569681   44633 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0330 09:43:26.569794   44633 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
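The join commands printed above embed a bootstrap token and a CA public-key hash. If the token expires or the hash needs to be re-derived later, both can be recovered on the control-plane node; a sketch, assuming the layout shown in this run (kubeadm lives under /var/lib/minikube/binaries and the certificate directory is /var/lib/minikube/certs, so the ca.crt path below is adjusted accordingly):

# list or mint a bootstrap token, with a ready-made join command
sudo kubeadm token list
sudo kubeadm token create --print-join-command
# re-derive the --discovery-token-ca-cert-hash value from the cluster CA
openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'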
	I0330 09:43:26.569807   44633 cni.go:84] Creating CNI manager for ""
	I0330 09:43:26.569819   44633 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:43:26.592499   44633 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0330 09:43:26.635465   44633 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0330 09:43:26.645009   44633 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0330 09:43:26.659629   44633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0330 09:43:26.659721   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:26.659721   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=e1b28cf61afe27b0a5598da1ee43bf06463b8063 minikube.k8s.io/name=embed-certs-995000 minikube.k8s.io/updated_at=2023_03_30T09_43_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:26.842845   44633 ops.go:34] apiserver oom_adj: -16
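After the successful init, minikube writes a bridge CNI config, records the apiserver's oom_adj (-16 above, i.e. strongly protected from the OOM killer), applies the minikube-rbac cluster-admin binding, and then polls `kubectl get sa default` until the API is serving. The same post-init state can be inspected by hand; a brief sketch built from commands already present in this log:

cat /etc/cni/net.d/1-k8s.conflist                 # the 457-byte bridge config written above
cat /proc/$(pgrep kube-apiserver)/oom_adj         # should print -16
sudo /var/lib/minikube/binaries/v1.26.3/kubectl \
  --kubeconfig=/var/lib/minikube/kubeconfig get sa default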
	I0330 09:43:26.842860   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:27.411293   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:27.909432   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:28.409427   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:28.911273   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:29.409290   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:29.909226   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:30.410072   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:30.910476   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:31.409326   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:31.909533   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:32.409598   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:32.909293   44633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:43:32.532857   44075 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0330 09:43:32.533080   44075 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0330 09:43:32.533094   44075 kubeadm.go:322] 
	I0330 09:43:32.533149   44075 kubeadm.go:322] Unfortunately, an error has occurred:
	I0330 09:43:32.533195   44075 kubeadm.go:322] 	timed out waiting for the condition
	I0330 09:43:32.533200   44075 kubeadm.go:322] 
	I0330 09:43:32.533244   44075 kubeadm.go:322] This error is likely caused by:
	I0330 09:43:32.533308   44075 kubeadm.go:322] 	- The kubelet is not running
	I0330 09:43:32.533433   44075 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0330 09:43:32.533451   44075 kubeadm.go:322] 
	I0330 09:43:32.533573   44075 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0330 09:43:32.533615   44075 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0330 09:43:32.533648   44075 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0330 09:43:32.533654   44075 kubeadm.go:322] 
	I0330 09:43:32.533784   44075 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0330 09:43:32.533898   44075 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0330 09:43:32.534007   44075 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0330 09:43:32.534097   44075 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0330 09:43:32.534209   44075 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0330 09:43:32.534252   44075 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0330 09:43:32.536939   44075 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0330 09:43:32.537013   44075 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0330 09:43:32.537114   44075 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
	I0330 09:43:32.537190   44075 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:43:32.537266   44075 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0330 09:43:32.537330   44075 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0330 09:43:32.537347   44075 kubeadm.go:403] StartCluster complete in 8m4.672052921s
	I0330 09:43:32.537453   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0330 09:43:32.558006   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.558024   44075 logs.go:279] No container was found matching "kube-apiserver"
	I0330 09:43:32.558099   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0330 09:43:32.578804   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.578817   44075 logs.go:279] No container was found matching "etcd"
	I0330 09:43:32.578887   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0330 09:43:32.598344   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.598356   44075 logs.go:279] No container was found matching "coredns"
	I0330 09:43:32.598426   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0330 09:43:32.618824   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.618837   44075 logs.go:279] No container was found matching "kube-scheduler"
	I0330 09:43:32.618903   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0330 09:43:32.639169   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.639181   44075 logs.go:279] No container was found matching "kube-proxy"
	I0330 09:43:32.639249   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0330 09:43:32.660415   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.660429   44075 logs.go:279] No container was found matching "kube-controller-manager"
	I0330 09:43:32.660496   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0330 09:43:32.680087   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.680107   44075 logs.go:279] No container was found matching "kindnet"
	I0330 09:43:32.680176   44075 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0330 09:43:32.701444   44075 logs.go:277] 0 containers: []
	W0330 09:43:32.701457   44075 logs.go:279] No container was found matching "kubernetes-dashboard"
	I0330 09:43:32.701465   44075 logs.go:123] Gathering logs for container status ...
	I0330 09:43:32.701473   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0330 09:43:34.744853   44075 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.043365791s)
	I0330 09:43:34.744974   44075 logs.go:123] Gathering logs for kubelet ...
	I0330 09:43:34.744982   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0330 09:43:34.781964   44075 logs.go:123] Gathering logs for dmesg ...
	I0330 09:43:34.781983   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0330 09:43:34.794825   44075 logs.go:123] Gathering logs for describe nodes ...
	I0330 09:43:34.794844   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0330 09:43:34.855199   44075 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0330 09:43:34.855213   44075 logs.go:123] Gathering logs for Docker ...
	I0330 09:43:34.855220   44075 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W0330 09:43:34.876486   44075 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0330 09:43:34.876506   44075 out.go:239] * 
	W0330 09:43:34.876607   44075 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:43:34.876622   44075 out.go:239] * 
	W0330 09:43:34.877184   44075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0330 09:43:34.961936   44075 out.go:177] 
	W0330 09:43:35.004175   44075 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0330 09:43:35.004295   44075 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0330 09:43:35.004357   44075 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0330 09:43:35.026017   44075 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-03-30 16:35:24 UTC, end at Thu 2023-03-30 16:43:36 UTC. --
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.963817851Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.964168371Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.964356862Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965081236Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965125069Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965140127Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965148593Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965194685Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965214733Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965244480Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965263887Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965280627Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965327664Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965654187Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965729174Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.966348901Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.974662847Z" level=info msg="Loading containers: start."
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.056519280Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.091396285Z" level=info msg="Loading containers: done."
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.099863638Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.099926662Z" level=info msg="Daemon has completed initialization"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.120983112Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 30 16:35:27 old-k8s-version-331000 systemd[1]: Started Docker Application Container Engine.
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.127843061Z" level=info msg="API listen on [::]:2376"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.130601270Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-03-30T16:43:38Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Mar30 16:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  16:43:39 up  2:42,  0 users,  load average: 0.99, 1.01, 1.44
	Linux old-k8s-version-331000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-03-30 16:35:24 UTC, end at Thu 2023-03-30 16:43:39 UTC. --
	Mar 30 16:43:37 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14057]: I0330 16:43:38.161603   14057 server.go:410] Version: v1.16.0
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14057]: I0330 16:43:38.162065   14057 plugins.go:100] No cloud provider specified.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14057]: I0330 16:43:38.162104   14057 server.go:773] Client rotation is on, will bootstrap in background
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14057]: I0330 16:43:38.163884   14057 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14057]: W0330 16:43:38.164688   14057 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14057]: W0330 16:43:38.164760   14057 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14057]: F0330 16:43:38.164797   14057 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14086]: I0330 16:43:38.898074   14086 server.go:410] Version: v1.16.0
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14086]: I0330 16:43:38.898397   14086 plugins.go:100] No cloud provider specified.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14086]: I0330 16:43:38.898434   14086 server.go:773] Client rotation is on, will bootstrap in background
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14086]: I0330 16:43:38.900269   14086 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14086]: W0330 16:43:38.901071   14086 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14086]: W0330 16:43:38.901145   14086 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 30 16:43:38 old-k8s-version-331000 kubelet[14086]: F0330 16:43:38.901172   14086 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 30 16:43:38 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0330 09:43:38.830365   44988 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (475.897407ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-331000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (497.76s)
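The kubelet journal excerpt above shows the node agent crash-looping on "failed to run Kubelet: mountpoint for cpu not found", and the run exits with K8S_KUBELET_NOT_RUNNING together with minikube's own suggestion to override the kubelet cgroup driver. A minimal local-triage sketch, reusing the profile and node container name recorded above; the commands are illustrative and are not part of the captured test output:

	# inspect the cgroup layout the kubelet complained about (a cpu controller entry would normally appear under /sys/fs/cgroup)
	docker exec old-k8s-version-331000 ls /sys/fs/cgroup
	# confirm the cgroup driver the node's Docker daemon reports (the preflight warning above detected "cgroupfs")
	docker exec old-k8s-version-331000 docker info --format '{{.CgroupDriver}}'
	# retry the start with the driver minikube suggests in the log above
	out/minikube-darwin-amd64 start -p old-k8s-version-331000 --extra-config=kubelet.cgroup-driver=systemd --driver=docker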

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:44:05.971511   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:44:48.284221   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:45:18.787791   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:45:36.533862   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:45:53.082179   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:46:12.273691   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:46:17.955618   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:46:30.617567   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:46:41.833544   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:46:45.897857   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:46:58.308027   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:47:15.885084   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:47:16.132535   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:47:42.925580   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:48:09.045074   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:48:18.294731   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:48:38.167781   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:48:39.029909   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:49:13.519874   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:49:41.366409   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:49:48.387347   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:50:01.218275   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:50:18.891630   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:50:53.186977   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:51:11.433677   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:51:12.377538   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:51:18.060906   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:51:30.723029   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:51:46.002071   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:52:15.989595   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:52:35.430693   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:52:43.033715   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (409.383155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-331000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
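The wait that timed out here can be approximated by hand with kubectl against the same profile (a sketch, not part of the test output; it assumes the old-k8s-version-331000 profile still exists and that minikube created a kubeconfig context of the same name):

	kubectl --context old-k8s-version-331000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-331000 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

With the apiserver behind 127.0.0.1:59139 reported as Stopped, both commands would fail in the same way as the EOF warnings above.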
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-331000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-331000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a",
	        "Created": "2023-03-30T16:29:25.810047609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:35:23.982718937Z",
	            "FinishedAt": "2023-03-30T16:35:20.868468275Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a-json.log",
	        "Name": "/old-k8s-version-331000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-331000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-331000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-331000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-331000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-331000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6790ea0f276be9c604217c0826bb2493527579753993635659c34a69f43b6b3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59138"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59139"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e6790ea0f276",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-331000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "521ae416df38",
	                        "old-k8s-version-331000"
	                    ],
	                    "NetworkID": "6fa3318f5503a289e050622f1a9f4b0bea1af814ad18d1b409b3b8c51f5769ef",
	                    "EndpointID": "d3e12c3eabea1d71c79fbe06fe901d0e28f2e0e0f2e8ff4418b5e3ffe4c96e09",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
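The 8443/tcp mapping in the inspect output above (HostPort 59139) is the endpoint the failed pod-list calls were hitting. It can be read back with the same Go-template style minikube itself uses later in this log, and probed directly (a hypothetical sketch, not part of the test run; the curl probe assumes anonymous access to the health endpoints, which would still fail here because the apiserver is stopped):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-331000
	curl -k https://127.0.0.1:59139/livez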
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (401.112013ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-331000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-331000 logs -n 25: (3.587334821s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-331000   | old-k8s-version-331000       | jenkins | v1.29.0 | 30 Mar 23 09:33 PDT |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-331000                         | old-k8s-version-331000       | jenkins | v1.29.0 | 30 Mar 23 09:35 PDT | 30 Mar 23 09:35 PDT |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-331000        | old-k8s-version-331000       | jenkins | v1.29.0 | 30 Mar 23 09:35 PDT | 30 Mar 23 09:35 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-331000                         | old-k8s-version-331000       | jenkins | v1.29.0 | 30 Mar 23 09:35 PDT |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-578000 sudo                         | no-preload-578000            | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-578000                              | no-preload-578000            | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-578000                              | no-preload-578000            | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-578000                              | no-preload-578000            | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	| delete  | -p no-preload-578000                              | no-preload-578000            | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:37 PDT |
	| start   | -p embed-certs-995000                             | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:37 PDT | 30 Mar 23 09:38 PDT |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-995000       | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT | 30 Mar 23 09:38 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-995000                             | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT | 30 Mar 23 09:38 PDT |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-995000            | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT | 30 Mar 23 09:38 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-995000                             | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:38 PDT | 30 Mar 23 09:43 PDT |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-995000 sudo                        | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-995000                             | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-995000                             | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-995000                             | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	| delete  | -p embed-certs-995000                             | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	| delete  | -p                                                | disable-driver-mounts-908000 | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | disable-driver-mounts-908000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | default-k8s-diff-port-582000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT | 30 Mar 23 09:45 PDT |
	|         | default-k8s-diff-port-582000                      |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT | 30 Mar 23 09:45 PDT |
	|         | default-k8s-diff-port-582000                      |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-582000  | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT | 30 Mar 23 09:45 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT |                     |
	|         | default-k8s-diff-port-582000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 09:45:19
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 09:45:19.371755   45405 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:45:19.371924   45405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:45:19.371930   45405 out.go:309] Setting ErrFile to fd 2...
	I0330 09:45:19.371934   45405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:45:19.372059   45405 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:45:19.373589   45405 out.go:303] Setting JSON to false
	I0330 09:45:19.394400   45405 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9887,"bootTime":1680184832,"procs":428,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:45:19.394493   45405 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:45:19.417541   45405 out.go:177] * [default-k8s-diff-port-582000] minikube v1.29.0 on Darwin 13.3
	I0330 09:45:19.438551   45405 notify.go:220] Checking for updates...
	I0330 09:45:19.459687   45405 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:45:19.481545   45405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:45:19.502812   45405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:45:19.524744   45405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:45:19.546603   45405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:45:19.567774   45405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:45:19.590268   45405 config.go:182] Loaded profile config "default-k8s-diff-port-582000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:45:19.590924   45405 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:45:19.655823   45405 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:45:19.655954   45405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:45:19.845886   45405 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:45:19.709503377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:45:19.889619   45405 out.go:177] * Using the docker driver based on existing profile
	I0330 09:45:19.910548   45405 start.go:295] selected driver: docker
	I0330 09:45:19.910572   45405 start.go:859] validating driver "docker" against &{Name:default-k8s-diff-port-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-582000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:45:19.910726   45405 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:45:19.915292   45405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:45:20.155780   45405 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:45:19.968223138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:45:20.155947   45405 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0330 09:45:20.155968   45405 cni.go:84] Creating CNI manager for ""
	I0330 09:45:20.156017   45405 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:45:20.156032   45405 start_flags.go:319] config:
	{Name:default-k8s-diff-port-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:45:20.178205   45405 out.go:177] * Starting control plane node default-k8s-diff-port-582000 in cluster default-k8s-diff-port-582000
	I0330 09:45:20.201373   45405 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:45:20.222254   45405 out.go:177] * Pulling base image ...
	I0330 09:45:20.266386   45405 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 09:45:20.266409   45405 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:45:20.266472   45405 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0330 09:45:20.266493   45405 cache.go:57] Caching tarball of preloaded images
	I0330 09:45:20.266730   45405 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:45:20.266760   45405 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.3 on docker
	I0330 09:45:20.267874   45405 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/config.json ...
	I0330 09:45:20.327009   45405 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:45:20.327041   45405 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:45:20.327063   45405 cache.go:193] Successfully downloaded all kic artifacts
	I0330 09:45:20.327107   45405 start.go:364] acquiring machines lock for default-k8s-diff-port-582000: {Name:mka2bfb96ebe33354a152f13dd4f5a600861b918 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:45:20.327198   45405 start.go:368] acquired machines lock for "default-k8s-diff-port-582000" in 72.748µs
	I0330 09:45:20.327223   45405 start.go:96] Skipping create...Using existing machine configuration
	I0330 09:45:20.327234   45405 fix.go:55] fixHost starting: 
	I0330 09:45:20.327483   45405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-582000 --format={{.State.Status}}
	I0330 09:45:20.388862   45405 fix.go:103] recreateIfNeeded on default-k8s-diff-port-582000: state=Stopped err=<nil>
	W0330 09:45:20.388904   45405 fix.go:129] unexpected machine state, will restart: <nil>
	I0330 09:45:20.413252   45405 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-582000" ...
	I0330 09:45:20.435409   45405 cli_runner.go:164] Run: docker start default-k8s-diff-port-582000
	I0330 09:45:20.806776   45405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-582000 --format={{.State.Status}}
	I0330 09:45:20.878379   45405 kic.go:426] container "default-k8s-diff-port-582000" state is running.
	I0330 09:45:20.878991   45405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-582000
	I0330 09:45:20.946632   45405 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/config.json ...
	I0330 09:45:20.947124   45405 machine.go:88] provisioning docker machine ...
	I0330 09:45:20.947151   45405 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-582000"
	I0330 09:45:20.947238   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:21.021767   45405 main.go:141] libmachine: Using SSH client type: native
	I0330 09:45:21.022175   45405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59491 <nil> <nil>}
	I0330 09:45:21.022190   45405 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-582000 && echo "default-k8s-diff-port-582000" | sudo tee /etc/hostname
	I0330 09:45:21.153922   45405 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-582000
	
	I0330 09:45:21.154082   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:21.219470   45405 main.go:141] libmachine: Using SSH client type: native
	I0330 09:45:21.219821   45405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59491 <nil> <nil>}
	I0330 09:45:21.219837   45405 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-582000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-582000/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-582000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:45:21.335707   45405 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:45:21.335732   45405 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:45:21.335762   45405 ubuntu.go:177] setting up certificates
	I0330 09:45:21.335770   45405 provision.go:83] configureAuth start
	I0330 09:45:21.335851   45405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-582000
	I0330 09:45:21.397448   45405 provision.go:138] copyHostCerts
	I0330 09:45:21.397539   45405 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:45:21.397549   45405 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:45:21.397653   45405 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:45:21.397860   45405 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:45:21.397868   45405 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:45:21.397937   45405 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:45:21.398080   45405 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:45:21.398085   45405 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:45:21.398148   45405 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:45:21.398265   45405 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-582000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-582000]
	I0330 09:45:21.469685   45405 provision.go:172] copyRemoteCerts
	I0330 09:45:21.469747   45405 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:45:21.469812   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:21.531546   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:45:21.618789   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:45:21.637325   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0330 09:45:21.655584   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0330 09:45:21.673023   45405 provision.go:86] duration metric: configureAuth took 337.240953ms
	I0330 09:45:21.673037   45405 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:45:21.673217   45405 config.go:182] Loaded profile config "default-k8s-diff-port-582000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:45:21.673295   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:21.736529   45405 main.go:141] libmachine: Using SSH client type: native
	I0330 09:45:21.736863   45405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59491 <nil> <nil>}
	I0330 09:45:21.736874   45405 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:45:21.854914   45405 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:45:21.854928   45405 ubuntu.go:71] root file system type: overlay
	I0330 09:45:21.855013   45405 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:45:21.855099   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:21.915129   45405 main.go:141] libmachine: Using SSH client type: native
	I0330 09:45:21.915481   45405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59491 <nil> <nil>}
	I0330 09:45:21.915531   45405 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:45:22.044362   45405 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:45:22.044483   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:22.105483   45405 main.go:141] libmachine: Using SSH client type: native
	I0330 09:45:22.105826   45405 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 59491 <nil> <nil>}
	I0330 09:45:22.105841   45405 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:45:22.229787   45405 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:45:22.229803   45405 machine.go:91] provisioned docker machine in 1.282668382s
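The update command a few lines above is deliberately idempotent: the freshly rendered docker.service.new only replaces the installed unit (and triggers a daemon-reload plus restart) when the two files differ. Expanded into a standalone form for readability, with the same commands and paths as in the log:

# Replace docker.service and restart Docker only when the rendered unit
# differs (diff exits non-zero if the files differ or the target is missing).
if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload
  sudo systemctl -f enable docker
  sudo systemctl -f restart docker
fi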
	I0330 09:45:22.229814   45405 start.go:300] post-start starting for "default-k8s-diff-port-582000" (driver="docker")
	I0330 09:45:22.229819   45405 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:45:22.229888   45405 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:45:22.229948   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:22.293001   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:45:22.379861   45405 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:45:22.383880   45405 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:45:22.383896   45405 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:45:22.383905   45405 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:45:22.383909   45405 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:45:22.383919   45405 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:45:22.384018   45405 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:45:22.384199   45405 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:45:22.384409   45405 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:45:22.392249   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:45:22.410287   45405 start.go:303] post-start completed in 180.462143ms
	I0330 09:45:22.410376   45405 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:45:22.410438   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:22.471355   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:45:22.553366   45405 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:45:22.558264   45405 fix.go:57] fixHost completed within 2.231017418s
	I0330 09:45:22.558281   45405 start.go:83] releasing machines lock for "default-k8s-diff-port-582000", held for 2.231072501s
	I0330 09:45:22.558382   45405 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-582000
	I0330 09:45:22.618957   45405 ssh_runner.go:195] Run: cat /version.json
	I0330 09:45:22.618986   45405 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0330 09:45:22.619038   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:22.619062   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:22.682501   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:45:22.682936   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:45:22.819598   45405 ssh_runner.go:195] Run: systemctl --version
	I0330 09:45:22.824675   45405 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0330 09:45:22.830053   45405 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0330 09:45:22.845821   45405 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0330 09:45:22.845889   45405 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0330 09:45:22.853883   45405 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0330 09:45:22.853895   45405 start.go:481] detecting cgroup driver to use...
	I0330 09:45:22.853906   45405 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:45:22.853980   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:45:22.867290   45405 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0330 09:45:22.875856   45405 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:45:22.884555   45405 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:45:22.884620   45405 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:45:22.893122   45405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:45:22.901650   45405 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:45:22.910077   45405 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:45:22.918815   45405 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:45:22.926838   45405 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:45:22.935508   45405 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:45:22.942823   45405 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:45:22.949882   45405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:45:23.019137   45405 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 09:45:23.095605   45405 start.go:481] detecting cgroup driver to use...
	I0330 09:45:23.095648   45405 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:45:23.095724   45405 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:45:23.107414   45405 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:45:23.107515   45405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:45:23.118421   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:45:23.134273   45405 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:45:23.139114   45405 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:45:23.147516   45405 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0330 09:45:23.162050   45405 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:45:23.265010   45405 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:45:23.361496   45405 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:45:23.361516   45405 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:45:23.375029   45405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:45:23.460352   45405 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:45:23.803217   45405 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:45:23.877422   45405 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0330 09:45:23.940829   45405 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:45:24.010468   45405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:45:24.081815   45405 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0330 09:45:24.093840   45405 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:45:24.167239   45405 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0330 09:45:24.251179   45405 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0330 09:45:24.251288   45405 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0330 09:45:24.255750   45405 start.go:549] Will wait 60s for crictl version
	I0330 09:45:24.255809   45405 ssh_runner.go:195] Run: which crictl
	I0330 09:45:24.259905   45405 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0330 09:45:24.291093   45405 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0330 09:45:24.291187   45405 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:45:24.316490   45405 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:45:24.383156   45405 out.go:204] * Preparing Kubernetes v1.26.3 on Docker 23.0.1 ...
	I0330 09:45:24.383306   45405 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-582000 dig +short host.docker.internal
	I0330 09:45:24.521554   45405 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:45:24.521676   45405 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:45:24.526093   45405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:45:24.536250   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:24.598335   45405 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 09:45:24.598438   45405 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:45:24.621018   45405 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0330 09:45:24.621042   45405 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:45:24.621124   45405 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:45:24.641505   45405 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.3
	registry.k8s.io/kube-scheduler:v1.26.3
	registry.k8s.io/kube-controller-manager:v1.26.3
	registry.k8s.io/kube-proxy:v1.26.3
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0330 09:45:24.641527   45405 cache_images.go:84] Images are preloaded, skipping loading
	I0330 09:45:24.641615   45405 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:45:24.669380   45405 cni.go:84] Creating CNI manager for ""
	I0330 09:45:24.669398   45405 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:45:24.669414   45405 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0330 09:45:24.669431   45405 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.26.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-582000 NodeName:default-k8s-diff-port-582000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:45:24.669545   45405 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-582000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0330 09:45:24.669615   45405 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-582000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0330 09:45:24.669682   45405 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.3
	I0330 09:45:24.677747   45405 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:45:24.677815   45405 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:45:24.685331   45405 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0330 09:45:24.698550   45405 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0330 09:45:24.711785   45405 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
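The kubeadm.yaml.new just copied over is what the restart path applies a few seconds later, phase by phase, rather than through a single kubeadm init. Pulled together from the kubeadm init phase lines further down in this log (PATH is assumed to contain the v1.26.3 binaries under /var/lib/minikube/binaries):

# The restart sequence used below, collected in one place for reference.
sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml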
	I0330 09:45:24.725006   45405 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:45:24.729161   45405 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:45:24.739175   45405 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000 for IP: 192.168.67.2
	I0330 09:45:24.739199   45405 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:45:24.739362   45405 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:45:24.739411   45405 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:45:24.739500   45405 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.key
	I0330 09:45:24.739565   45405 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/apiserver.key.c7fa3a9e
	I0330 09:45:24.739615   45405 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/proxy-client.key
	I0330 09:45:24.739815   45405 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:45:24.739857   45405 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:45:24.739869   45405 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:45:24.739899   45405 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:45:24.739928   45405 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:45:24.739964   45405 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:45:24.740041   45405 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:45:24.740618   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:45:24.758131   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0330 09:45:24.776667   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:45:24.794301   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0330 09:45:24.811683   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:45:24.829081   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:45:24.846793   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:45:24.864155   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:45:24.881489   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:45:24.898868   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:45:24.916491   45405 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:45:24.933978   45405 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:45:24.947020   45405 ssh_runner.go:195] Run: openssl version
	I0330 09:45:24.952788   45405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:45:24.961082   45405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:45:24.965034   45405 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:45:24.965077   45405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:45:24.970704   45405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
	I0330 09:45:24.978572   45405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:45:24.986744   45405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:45:24.990973   45405 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:45:24.991028   45405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:45:24.996606   45405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:45:25.004087   45405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:45:25.012282   45405 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:45:25.016387   45405 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:45:25.016433   45405 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:45:25.021975   45405 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
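The three openssl x509 -hash / ln -fs pairs above implement the standard hashed-symlink layout of /etc/ssl/certs: each trusted PEM is reachable through a <subject-hash>.0 symlink so TLS libraries can locate it by subject. A minimal standalone sketch for the last cert handled above, using the same paths as in the log:

# Compute the subject hash and create the lookup symlink -- the same idea
# as the 51391683.0 link created in the preceding command.
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem)
sudo ln -fs /usr/share/ca-certificates/25448.pem "/etc/ssl/certs/${hash}.0"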
	I0330 09:45:25.029680   45405 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-582000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:default-k8s-diff-port-582000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:45:25.029786   45405 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:45:25.049313   45405 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:45:25.057478   45405 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0330 09:45:25.057495   45405 kubeadm.go:633] restartCluster start
	I0330 09:45:25.057545   45405 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0330 09:45:25.064700   45405 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:25.064770   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:45:25.127308   45405 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-582000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:45:25.127467   45405 kubeconfig.go:146] "default-k8s-diff-port-582000" context is missing from /Users/jenkins/minikube-integration/16199-24978/kubeconfig - will repair!
	I0330 09:45:25.128403   45405 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:45:25.130161   45405 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0330 09:45:25.138349   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:25.138410   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:25.147066   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:25.649256   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:25.649450   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:25.661050   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:26.147137   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:26.147245   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:26.157554   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:26.647496   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:26.647618   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:26.658532   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:27.149215   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:27.149384   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:27.160623   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:27.647435   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:27.647517   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:27.657291   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:28.149243   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:28.149421   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:28.160750   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:28.648568   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:28.648690   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:28.659877   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:29.147317   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:29.147416   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:29.157226   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:29.649189   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:29.649382   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:29.661241   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:30.148400   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:30.148597   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:30.159746   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:30.647421   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:30.647537   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:30.657615   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:31.149247   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:31.149437   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:31.161077   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:31.647489   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:31.647650   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:31.658626   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:32.147221   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:32.147319   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:32.157209   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:32.648590   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:32.648697   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:32.660289   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:33.149220   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:33.149408   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:33.160672   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:33.647355   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:33.647439   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:33.657938   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:34.149224   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:34.149375   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:34.160590   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:34.647496   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:34.647607   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:34.658930   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:35.147466   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:35.147538   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:35.157066   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:35.157077   45405 api_server.go:165] Checking apiserver status ...
	I0330 09:45:35.157135   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:45:35.165686   45405 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:35.165699   45405 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0330 09:45:35.165707   45405 kubeadm.go:1120] stopping kube-system containers ...
	I0330 09:45:35.165781   45405 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:45:35.186717   45405 docker.go:465] Stopping containers: [1a65b7933504 b76f6c9fda1c fb1eb69273d4 7837692e7a40 c697e3fc8481 2767033bb23f 888cffeb37fe 7a2250688e94 9012cf26457c 26336daf3aed 8c04ab8da777 cfce1757a3b0 19baae588a90 b99ca7a9f40d 0a60bb1d7f2c 16f5d877f668]
	I0330 09:45:35.186804   45405 ssh_runner.go:195] Run: docker stop 1a65b7933504 b76f6c9fda1c fb1eb69273d4 7837692e7a40 c697e3fc8481 2767033bb23f 888cffeb37fe 7a2250688e94 9012cf26457c 26336daf3aed 8c04ab8da777 cfce1757a3b0 19baae588a90 b99ca7a9f40d 0a60bb1d7f2c 16f5d877f668
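The restart path stops every kube-system container by matching the k8s_<container>_<pod>_<namespace>_... naming convention used by dockershim/cri-dockerd. The two steps above (list, then stop the collected IDs) can be sketched as a single hypothetical pipeline:

# List and stop all containers created for the kube-system namespace;
# the regex relies on the k8s_* container-naming convention.
docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' \
  | xargs -r docker stop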
	I0330 09:45:35.210264   45405 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0330 09:45:35.220951   45405 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:45:35.228712   45405 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Mar 30 16:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 30 16:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Mar 30 16:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Mar 30 16:44 /etc/kubernetes/scheduler.conf
	
	I0330 09:45:35.228768   45405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0330 09:45:35.236320   45405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0330 09:45:35.244011   45405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0330 09:45:35.251584   45405 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:35.251635   45405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0330 09:45:35.258953   45405 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0330 09:45:35.266461   45405 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:45:35.266514   45405 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0330 09:45:35.273706   45405 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:45:35.281291   45405 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0330 09:45:35.281304   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:45:35.336250   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:45:35.853755   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:45:35.989729   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:45:36.046553   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:45:36.148048   45405 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:45:36.148125   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:45:36.659092   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:45:37.158443   45405 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:45:37.231816   45405 api_server.go:71] duration metric: took 1.083763726s to wait for apiserver process to appear ...
	I0330 09:45:37.231831   45405 api_server.go:87] waiting for apiserver healthz status ...
	I0330 09:45:37.231844   45405 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59490/healthz ...
	I0330 09:45:37.233486   45405 api_server.go:268] stopped: https://127.0.0.1:59490/healthz: Get "https://127.0.0.1:59490/healthz": EOF
	I0330 09:45:37.734241   45405 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59490/healthz ...
	I0330 09:45:39.881344   45405 api_server.go:278] https://127.0.0.1:59490/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0330 09:45:39.881362   45405 api_server.go:102] status: https://127.0.0.1:59490/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0330 09:45:40.233730   45405 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59490/healthz ...
	I0330 09:45:40.239624   45405 api_server.go:278] https://127.0.0.1:59490/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0330 09:45:40.239641   45405 api_server.go:102] status: https://127.0.0.1:59490/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0330 09:45:40.734441   45405 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59490/healthz ...
	I0330 09:45:40.741478   45405 api_server.go:278] https://127.0.0.1:59490/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0330 09:45:40.741495   45405 api_server.go:102] status: https://127.0.0.1:59490/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0330 09:45:41.234436   45405 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:59490/healthz ...
	I0330 09:45:41.239920   45405 api_server.go:278] https://127.0.0.1:59490/healthz returned 200:
	ok
	I0330 09:45:41.246762   45405 api_server.go:140] control plane version: v1.26.3
	I0330 09:45:41.246773   45405 api_server.go:130] duration metric: took 4.014932091s to wait for apiserver health ...
	I0330 09:45:41.246781   45405 cni.go:84] Creating CNI manager for ""
	I0330 09:45:41.246791   45405 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:45:41.268414   45405 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0330 09:45:41.289142   45405 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0330 09:45:41.298500   45405 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0330 09:45:41.311610   45405 system_pods.go:43] waiting for kube-system pods to appear ...
	I0330 09:45:41.318771   45405 system_pods.go:59] 8 kube-system pods found
	I0330 09:45:41.318785   45405 system_pods.go:61] "coredns-787d4945fb-99r4k" [5be29ee2-07fb-43fb-bd6e-18f306dc8103] Running
	I0330 09:45:41.318791   45405 system_pods.go:61] "etcd-default-k8s-diff-port-582000" [58266f5b-ec1a-4c48-b645-dcb30c727773] Running
	I0330 09:45:41.318795   45405 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-582000" [55b582be-b791-4b9b-8056-dcfd06731ac9] Running
	I0330 09:45:41.318803   45405 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-582000" [700efdae-7027-484b-9571-f5a0e3173b72] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0330 09:45:41.318809   45405 system_pods.go:61] "kube-proxy-mbbbc" [fd4cfbd3-96dc-4699-98dd-1fdfbb3514ca] Running
	I0330 09:45:41.318813   45405 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-582000" [1df8199f-72a3-482f-8fd3-97161ceab0f9] Running
	I0330 09:45:41.318818   45405 system_pods.go:61] "metrics-server-7997d45854-m4lzk" [5e1d9527-af78-44b5-99c3-33051695f3e2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0330 09:45:41.318823   45405 system_pods.go:61] "storage-provisioner" [78244a96-a517-423a-bcdb-ed6153f53120] Running
	I0330 09:45:41.318827   45405 system_pods.go:74] duration metric: took 7.206207ms to wait for pod list to return data ...
	I0330 09:45:41.318832   45405 node_conditions.go:102] verifying NodePressure condition ...
	I0330 09:45:41.321900   45405 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0330 09:45:41.321912   45405 node_conditions.go:123] node cpu capacity is 6
	I0330 09:45:41.321922   45405 node_conditions.go:105] duration metric: took 3.08636ms to run NodePressure ...
	I0330 09:45:41.321931   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:45:41.549358   45405 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0330 09:45:41.553909   45405 kubeadm.go:784] kubelet initialised
	I0330 09:45:41.553943   45405 kubeadm.go:785] duration metric: took 4.569002ms waiting for restarted kubelet to initialise ...
	I0330 09:45:41.553950   45405 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0330 09:45:41.560000   45405 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-99r4k" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:41.565578   45405 pod_ready.go:92] pod "coredns-787d4945fb-99r4k" in "kube-system" namespace has status "Ready":"True"
	I0330 09:45:41.565592   45405 pod_ready.go:81] duration metric: took 5.574241ms waiting for pod "coredns-787d4945fb-99r4k" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:41.565600   45405 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:41.571240   45405 pod_ready.go:92] pod "etcd-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:45:41.571252   45405 pod_ready.go:81] duration metric: took 5.646913ms waiting for pod "etcd-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:41.571259   45405 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:41.576171   45405 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:45:41.576181   45405 pod_ready.go:81] duration metric: took 4.917618ms waiting for pod "kube-apiserver-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:41.576188   45405 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:43.727059   45405 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:45:46.229759   45405 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:45:48.724828   45405 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:45:50.725796   45405 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:45:53.227295   45405 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:45:53.723804   45405 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:45:53.723818   45405 pod_ready.go:81] duration metric: took 12.147608511s waiting for pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:53.723825   45405 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mbbbc" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:53.728531   45405 pod_ready.go:92] pod "kube-proxy-mbbbc" in "kube-system" namespace has status "Ready":"True"
	I0330 09:45:53.728539   45405 pod_ready.go:81] duration metric: took 4.710372ms waiting for pod "kube-proxy-mbbbc" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:53.728546   45405 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:55.741914   45405 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"False"
	I0330 09:45:56.239630   45405 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:45:56.239642   45405 pod_ready.go:81] duration metric: took 2.511087828s waiting for pod "kube-scheduler-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:56.239649   45405 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace to be "Ready" ...
	I0330 09:45:58.250102   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:00.253000   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:02.253363   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:04.750892   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:06.752498   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:08.754068   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:11.254113   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:13.752947   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:15.753376   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:18.250498   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:20.752806   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:23.252244   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:25.754400   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:28.250300   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:30.251148   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:32.252212   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:34.252312   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:36.752411   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:38.752529   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:41.250689   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:43.252745   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:45.753149   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:48.251907   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:50.252673   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:52.751195   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:54.753731   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:57.253022   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:46:59.752676   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:01.753652   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:04.251513   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:06.753932   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:09.253987   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:11.753390   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:14.252382   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:16.754491   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:19.251833   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:21.751888   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:23.752554   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:25.754144   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:28.252239   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:30.252594   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:32.751755   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:34.752551   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:36.754379   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:39.251788   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:41.254397   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:43.752838   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:45.753623   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:48.250485   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:50.254680   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:52.752290   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:55.353824   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:47:57.854511   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:00.351282   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:02.352327   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:04.852841   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:07.351477   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:09.854170   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:12.352574   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:14.853482   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:17.351657   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:19.854417   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:22.352931   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:24.853219   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:27.352985   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:29.853066   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:31.853284   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:33.853585   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:36.352779   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:38.353620   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:40.853590   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:43.351695   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:45.352895   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:47.354104   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:49.854323   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:51.855319   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:54.352567   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:56.353070   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:48:58.854410   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:01.355095   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:03.855761   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:06.351645   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:08.353534   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:10.853603   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:13.352150   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:15.856065   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:18.354692   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:20.354788   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:22.855728   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:24.856789   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:27.353610   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:29.855994   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:32.353498   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:34.356138   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:36.856432   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:39.354008   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:41.355263   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:43.855005   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:45.856523   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:48.354315   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:50.356033   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:52.855102   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:54.855236   45405 pod_ready.go:102] pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:49:56.347936   45405 pod_ready.go:81] duration metric: took 4m0.004997511s waiting for pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace to be "Ready" ...
	E0330 09:49:56.347958   45405 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7997d45854-m4lzk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0330 09:49:56.347980   45405 pod_ready.go:38] duration metric: took 4m14.690745833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0330 09:49:56.348027   45405 kubeadm.go:637] restartCluster took 4m31.187216599s
	W0330 09:49:56.348140   45405 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0330 09:49:56.348178   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0330 09:50:00.622415   45405 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (4.274122329s)
	I0330 09:50:00.622491   45405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:50:00.632522   45405 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:50:00.640334   45405 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0330 09:50:00.640383   45405 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:50:00.648058   45405 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0330 09:50:00.648088   45405 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0330 09:50:00.698525   45405 kubeadm.go:322] [init] Using Kubernetes version: v1.26.3
	I0330 09:50:00.698575   45405 kubeadm.go:322] [preflight] Running pre-flight checks
	I0330 09:50:00.807165   45405 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0330 09:50:00.807243   45405 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0330 09:50:00.807320   45405 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0330 09:50:00.938331   45405 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0330 09:50:00.980487   45405 out.go:204]   - Generating certificates and keys ...
	I0330 09:50:00.980547   45405 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0330 09:50:00.980597   45405 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0330 09:50:00.980663   45405 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0330 09:50:00.980712   45405 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0330 09:50:00.980769   45405 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0330 09:50:00.980890   45405 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0330 09:50:00.980986   45405 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0330 09:50:00.981100   45405 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0330 09:50:00.981196   45405 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0330 09:50:00.981295   45405 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0330 09:50:00.981380   45405 kubeadm.go:322] [certs] Using the existing "sa" key
	I0330 09:50:00.981493   45405 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0330 09:50:01.046537   45405 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0330 09:50:01.142862   45405 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0330 09:50:01.218385   45405 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0330 09:50:01.426070   45405 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0330 09:50:01.437004   45405 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0330 09:50:01.437610   45405 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0330 09:50:01.437669   45405 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0330 09:50:01.505522   45405 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0330 09:50:01.527026   45405 out.go:204]   - Booting up control plane ...
	I0330 09:50:01.527106   45405 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0330 09:50:01.527175   45405 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0330 09:50:01.527224   45405 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0330 09:50:01.527299   45405 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0330 09:50:01.527448   45405 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0330 09:50:06.513819   45405 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002426 seconds
	I0330 09:50:06.513967   45405 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0330 09:50:06.523588   45405 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0330 09:50:07.041789   45405 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0330 09:50:07.041944   45405 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-582000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0330 09:50:07.549461   45405 kubeadm.go:322] [bootstrap-token] Using token: m4nof2.szmwl9mewzj5dgex
	I0330 09:50:07.589299   45405 out.go:204]   - Configuring RBAC rules ...
	I0330 09:50:07.589504   45405 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0330 09:50:07.630493   45405 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0330 09:50:07.635640   45405 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0330 09:50:07.637817   45405 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0330 09:50:07.641232   45405 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0330 09:50:07.643469   45405 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0330 09:50:07.651905   45405 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0330 09:50:07.804992   45405 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0330 09:50:08.033775   45405 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0330 09:50:08.034468   45405 kubeadm.go:322] 
	I0330 09:50:08.034592   45405 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0330 09:50:08.034602   45405 kubeadm.go:322] 
	I0330 09:50:08.034680   45405 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0330 09:50:08.034690   45405 kubeadm.go:322] 
	I0330 09:50:08.034716   45405 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0330 09:50:08.034771   45405 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0330 09:50:08.034817   45405 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0330 09:50:08.034823   45405 kubeadm.go:322] 
	I0330 09:50:08.034874   45405 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0330 09:50:08.034884   45405 kubeadm.go:322] 
	I0330 09:50:08.034917   45405 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0330 09:50:08.034921   45405 kubeadm.go:322] 
	I0330 09:50:08.034960   45405 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0330 09:50:08.035022   45405 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0330 09:50:08.035072   45405 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0330 09:50:08.035080   45405 kubeadm.go:322] 
	I0330 09:50:08.035133   45405 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0330 09:50:08.035180   45405 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0330 09:50:08.035184   45405 kubeadm.go:322] 
	I0330 09:50:08.035253   45405 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token m4nof2.szmwl9mewzj5dgex \
	I0330 09:50:08.035361   45405 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1e834fc5c7ed9a2912dba9ac16dbe1efd1198393393505f3962bef154a0134ee \
	I0330 09:50:08.035383   45405 kubeadm.go:322] 	--control-plane 
	I0330 09:50:08.035389   45405 kubeadm.go:322] 
	I0330 09:50:08.035492   45405 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0330 09:50:08.035507   45405 kubeadm.go:322] 
	I0330 09:50:08.035609   45405 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token m4nof2.szmwl9mewzj5dgex \
	I0330 09:50:08.035797   45405 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1e834fc5c7ed9a2912dba9ac16dbe1efd1198393393505f3962bef154a0134ee 
	I0330 09:50:08.040404   45405 kubeadm.go:322] W0330 16:50:00.693331    9294 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0330 09:50:08.040571   45405 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0330 09:50:08.040703   45405 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0330 09:50:08.040717   45405 cni.go:84] Creating CNI manager for ""
	I0330 09:50:08.040730   45405 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:50:08.079831   45405 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0330 09:50:08.138948   45405 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0330 09:50:08.149187   45405 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0330 09:50:08.162467   45405 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0330 09:50:08.162542   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=e1b28cf61afe27b0a5598da1ee43bf06463b8063 minikube.k8s.io/name=default-k8s-diff-port-582000 minikube.k8s.io/updated_at=2023_03_30T09_50_08_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:08.162544   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:08.233423   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:08.251676   45405 ops.go:34] apiserver oom_adj: -16
	I0330 09:50:08.802572   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:09.302651   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:09.803312   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:10.302416   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:10.804329   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:11.302472   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:11.804385   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:12.304438   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:12.803187   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:13.302498   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:13.803382   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:14.302574   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:14.802941   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:15.302782   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:15.802715   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:16.304206   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:16.802887   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:17.302596   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:17.802796   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:18.303591   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:18.802779   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:19.303063   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:19.803312   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:20.302583   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:20.802638   45405 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0330 09:50:20.869771   45405 kubeadm.go:1073] duration metric: took 12.706995272s to wait for elevateKubeSystemPrivileges.
	I0330 09:50:20.869789   45405 kubeadm.go:403] StartCluster complete in 4m55.736266834s
	I0330 09:50:20.869814   45405 settings.go:142] acquiring lock: {Name:mkee06510b0682aea765fc9cbf62cdda0355bccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:50:20.869915   45405 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:50:20.870402   45405 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:50:20.870680   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0330 09:50:20.870691   45405 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0330 09:50:20.870764   45405 addons.go:66] Setting storage-provisioner=true in profile "default-k8s-diff-port-582000"
	I0330 09:50:20.870782   45405 addons.go:66] Setting metrics-server=true in profile "default-k8s-diff-port-582000"
	I0330 09:50:20.870784   45405 addons.go:228] Setting addon storage-provisioner=true in "default-k8s-diff-port-582000"
	I0330 09:50:20.870790   45405 addons.go:66] Setting dashboard=true in profile "default-k8s-diff-port-582000"
	I0330 09:50:20.870800   45405 addons.go:228] Setting addon metrics-server=true in "default-k8s-diff-port-582000"
	W0330 09:50:20.870805   45405 addons.go:237] addon storage-provisioner should already be in state true
	W0330 09:50:20.870807   45405 addons.go:237] addon metrics-server should already be in state true
	I0330 09:50:20.870811   45405 addons.go:228] Setting addon dashboard=true in "default-k8s-diff-port-582000"
	W0330 09:50:20.870818   45405 addons.go:237] addon dashboard should already be in state true
	I0330 09:50:20.870782   45405 addons.go:66] Setting default-storageclass=true in profile "default-k8s-diff-port-582000"
	I0330 09:50:20.870850   45405 host.go:66] Checking if "default-k8s-diff-port-582000" exists ...
	I0330 09:50:20.870857   45405 host.go:66] Checking if "default-k8s-diff-port-582000" exists ...
	I0330 09:50:20.870849   45405 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-582000"
	I0330 09:50:20.870867   45405 host.go:66] Checking if "default-k8s-diff-port-582000" exists ...
	I0330 09:50:20.870853   45405 config.go:182] Loaded profile config "default-k8s-diff-port-582000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:50:20.871178   45405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-582000 --format={{.State.Status}}
	I0330 09:50:20.871338   45405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-582000 --format={{.State.Status}}
	I0330 09:50:20.871357   45405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-582000 --format={{.State.Status}}
	I0330 09:50:20.871974   45405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-582000 --format={{.State.Status}}
	I0330 09:50:20.994609   45405 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0330 09:50:20.994631   45405 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0330 09:50:20.973152   45405 addons.go:228] Setting addon default-storageclass=true in "default-k8s-diff-port-582000"
	I0330 09:50:21.014521   45405 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W0330 09:50:21.014534   45405 addons.go:237] addon default-storageclass should already be in state true
	I0330 09:50:21.014586   45405 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0330 09:50:21.035853   45405 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:50:21.035855   45405 host.go:66] Checking if "default-k8s-diff-port-582000" exists ...
	I0330 09:50:21.072782   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0330 09:50:21.072807   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0330 09:50:21.072944   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:50:21.072943   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:50:21.073598   45405 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-582000 --format={{.State.Status}}
	I0330 09:50:21.079788   45405 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0330 09:50:21.094668   45405 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0330 09:50:21.131824   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0330 09:50:21.131892   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0330 09:50:21.132005   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:50:21.185822   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:50:21.186268   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:50:21.186329   45405 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0330 09:50:21.186339   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0330 09:50:21.186424   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:50:21.215927   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:50:21.260931   45405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59491 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/default-k8s-diff-port-582000/id_rsa Username:docker}
	I0330 09:50:21.429175   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0330 09:50:21.429191   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0330 09:50:21.436054   45405 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-582000" context rescaled to 1 replicas
	I0330 09:50:21.436094   45405 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:50:21.440475   45405 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0330 09:50:21.441885   45405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0330 09:50:21.442407   45405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:50:21.460440   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0330 09:50:21.448294   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0330 09:50:21.460338   45405 out.go:177] * Verifying Kubernetes components...
	I0330 09:50:21.460509   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0330 09:50:21.477000   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0330 09:50:21.502520   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0330 09:50:21.477267   45405 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0330 09:50:21.502563   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0330 09:50:21.502649   45405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:50:21.546584   45405 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0330 09:50:21.546614   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0330 09:50:21.554927   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0330 09:50:21.554940   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0330 09:50:21.627599   45405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0330 09:50:21.639940   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0330 09:50:21.639962   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0330 09:50:21.662995   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0330 09:50:21.663017   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0330 09:50:21.754662   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0330 09:50:21.754677   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0330 09:50:21.834248   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0330 09:50:21.834262   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0330 09:50:21.859991   45405 addons.go:420] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0330 09:50:21.860007   45405 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0330 09:50:21.948482   45405 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0330 09:50:22.446601   45405 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.351826958s)
	I0330 09:50:22.446625   45405 start.go:917] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0330 09:50:22.667005   45405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.206500415s)
	I0330 09:50:22.667014   45405 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.164316195s)
	I0330 09:50:22.667183   45405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-582000
	I0330 09:50:22.727839   45405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.100151985s)
	I0330 09:50:22.727878   45405 addons.go:464] Verifying addon metrics-server=true in "default-k8s-diff-port-582000"
	I0330 09:50:22.731444   45405 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-582000" to be "Ready" ...
	I0330 09:50:22.735783   45405 node_ready.go:49] node "default-k8s-diff-port-582000" has status "Ready":"True"
	I0330 09:50:22.735802   45405 node_ready.go:38] duration metric: took 4.338205ms waiting for node "default-k8s-diff-port-582000" to be "Ready" ...
	I0330 09:50:22.735816   45405 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0330 09:50:22.743141   45405 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-hlbnk" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:22.955565   45405 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.007027468s)
	I0330 09:50:22.980508   45405 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-582000 addons enable metrics-server	
	
	
	I0330 09:50:23.056030   45405 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0330 09:50:23.114853   45405 addons.go:499] enable addons completed in 2.244104508s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0330 09:50:24.757240   45405 pod_ready.go:102] pod "coredns-787d4945fb-hlbnk" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:26.756753   45405 pod_ready.go:92] pod "coredns-787d4945fb-hlbnk" in "kube-system" namespace has status "Ready":"True"
	I0330 09:50:26.756767   45405 pod_ready.go:81] duration metric: took 4.013519737s waiting for pod "coredns-787d4945fb-hlbnk" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:26.756774   45405 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-tdmhj" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.767457   45405 pod_ready.go:92] pod "coredns-787d4945fb-tdmhj" in "kube-system" namespace has status "Ready":"True"
	I0330 09:50:27.767472   45405 pod_ready.go:81] duration metric: took 1.010670147s waiting for pod "coredns-787d4945fb-tdmhj" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.767480   45405 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.773130   45405 pod_ready.go:92] pod "etcd-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:50:27.773141   45405 pod_ready.go:81] duration metric: took 5.656051ms waiting for pod "etcd-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.773148   45405 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.827115   45405 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:50:27.827130   45405 pod_ready.go:81] duration metric: took 53.975569ms waiting for pod "kube-apiserver-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.827139   45405 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.833390   45405 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:50:27.833401   45405 pod_ready.go:81] duration metric: took 6.256748ms waiting for pod "kube-controller-manager-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.833408   45405 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pbzk6" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.954122   45405 pod_ready.go:92] pod "kube-proxy-pbzk6" in "kube-system" namespace has status "Ready":"True"
	I0330 09:50:27.954134   45405 pod_ready.go:81] duration metric: took 120.719489ms waiting for pod "kube-proxy-pbzk6" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:27.954142   45405 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:28.354152   45405 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-582000" in "kube-system" namespace has status "Ready":"True"
	I0330 09:50:28.354164   45405 pod_ready.go:81] duration metric: took 400.008467ms waiting for pod "kube-scheduler-default-k8s-diff-port-582000" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:28.354173   45405 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace to be "Ready" ...
	I0330 09:50:30.760630   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:33.259759   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:35.260194   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:37.262365   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:39.760879   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:41.761863   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:43.763731   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:46.262361   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:48.760230   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:50.763722   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:53.261069   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:55.263424   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:57.760891   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:50:59.762108   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:01.763056   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:04.264024   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:06.762281   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:08.763309   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:11.261379   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:13.264757   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:15.762021   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:18.263519   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:20.764352   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:23.261340   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:25.264332   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:27.761826   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:29.763041   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:31.763224   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:34.264394   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:36.761353   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:38.764505   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:40.764625   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:43.262790   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:45.762156   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:47.763927   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:50.261914   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:52.263902   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:54.264372   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:56.764575   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:51:59.262816   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:01.263253   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:03.264945   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:05.763040   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:07.765003   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:10.264485   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:12.762984   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:14.765423   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:17.262960   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:19.264114   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:21.762873   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:24.264199   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:26.763955   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:28.764895   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:31.265491   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:33.762817   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:35.763719   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:37.766110   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:40.265963   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:42.763287   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:44.764604   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:47.263435   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:49.265953   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:51.763706   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:54.263546   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:56.263852   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:52:58.266367   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:53:00.764459   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:53:02.765313   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:53:05.263515   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	I0330 09:53:07.266610   45405 pod_ready.go:102] pod "metrics-server-7997d45854-xpzwl" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-03-30 16:35:24 UTC, end at Thu 2023-03-30 16:53:12 UTC. --
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.963817851Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.964168371Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.964356862Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965081236Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965125069Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965140127Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965148593Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965194685Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965214733Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965244480Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965263887Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965280627Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965327664Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965654187Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965729174Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.966348901Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.974662847Z" level=info msg="Loading containers: start."
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.056519280Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.091396285Z" level=info msg="Loading containers: done."
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.099863638Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.099926662Z" level=info msg="Daemon has completed initialization"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.120983112Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 30 16:35:27 old-k8s-version-331000 systemd[1]: Started Docker Application Container Engine.
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.127843061Z" level=info msg="API listen on [::]:2376"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.130601270Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-03-30T16:53:14Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Mar30 16:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  16:53:14 up  2:52,  0 users,  load average: 0.70, 0.76, 1.07
	Linux old-k8s-version-331000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-03-30 16:35:24 UTC, end at Thu 2023-03-30 16:53:14 UTC. --
	Mar 30 16:53:12 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 30 16:53:13 old-k8s-version-331000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Mar 30 16:53:13 old-k8s-version-331000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 30 16:53:13 old-k8s-version-331000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 30 16:53:13 old-k8s-version-331000 kubelet[24252]: I0330 16:53:13.502019   24252 server.go:410] Version: v1.16.0
	Mar 30 16:53:13 old-k8s-version-331000 kubelet[24252]: I0330 16:53:13.502238   24252 plugins.go:100] No cloud provider specified.
	Mar 30 16:53:13 old-k8s-version-331000 kubelet[24252]: I0330 16:53:13.502251   24252 server.go:773] Client rotation is on, will bootstrap in background
	Mar 30 16:53:13 old-k8s-version-331000 kubelet[24252]: I0330 16:53:13.504141   24252 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 30 16:53:13 old-k8s-version-331000 kubelet[24252]: W0330 16:53:13.504812   24252 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 30 16:53:13 old-k8s-version-331000 kubelet[24252]: W0330 16:53:13.504879   24252 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 30 16:53:13 old-k8s-version-331000 kubelet[24252]: F0330 16:53:13.504904   24252 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 30 16:53:13 old-k8s-version-331000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 30 16:53:13 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 30 16:53:14 old-k8s-version-331000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Mar 30 16:53:14 old-k8s-version-331000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 30 16:53:14 old-k8s-version-331000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 30 16:53:14 old-k8s-version-331000 kubelet[24266]: I0330 16:53:14.247750   24266 server.go:410] Version: v1.16.0
	Mar 30 16:53:14 old-k8s-version-331000 kubelet[24266]: I0330 16:53:14.248122   24266 plugins.go:100] No cloud provider specified.
	Mar 30 16:53:14 old-k8s-version-331000 kubelet[24266]: I0330 16:53:14.248159   24266 server.go:773] Client rotation is on, will bootstrap in background
	Mar 30 16:53:14 old-k8s-version-331000 kubelet[24266]: I0330 16:53:14.250046   24266 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 30 16:53:14 old-k8s-version-331000 kubelet[24266]: W0330 16:53:14.252320   24266 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 30 16:53:14 old-k8s-version-331000 kubelet[24266]: W0330 16:53:14.252401   24266 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 30 16:53:14 old-k8s-version-331000 kubelet[24266]: F0330 16:53:14.252468   24266 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 30 16:53:14 old-k8s-version-331000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 30 16:53:14 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:53:14.388783   45922 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (402.324619ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-331000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.01s)
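Note: the kubelet log above loops on "failed to run Kubelet: mountpoint for cpu not found" (restart counter at 929), so the apiserver never comes back and the wait times out. A minimal way to confirm the missing cpu cgroup mount from the host would be the following sketch; it is not part of the test run and assumes the kic container is still up under the name shown in the log:

	# hypothetical follow-up diagnosis, container name taken from the log above
	docker exec old-k8s-version-331000 sh -c 'mount | grep cgroup'   # list cgroup mounts inside the node container
	docker exec old-k8s-version-331000 sh -c 'ls /sys/fs/cgroup/cpu 2>/dev/null || echo "cpu controller not mounted"'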

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0330 09:53:18.301516   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:53:38.174556   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:54:13.528148   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:54:48.393447   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:55:53.194059   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:56:12.383298   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:56:18.068717   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:56:30.731162   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:56:46.008969   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:57:15.996175   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:57:41.125548   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 09:57:43.039408   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:57:53.782376   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:58:18.308440   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:58:38.183265   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59139/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0330 09:59:13.535019   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 09:59:48.400854   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 09:59:57.078298   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
E0330 09:59:57.084035   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
E0330 09:59:57.096283   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
E0330 09:59:57.117736   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
E0330 09:59:57.158048   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
E0330 09:59:57.238431   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 09:59:57.400633   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
E0330 09:59:57.721823   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 09:59:58.362002   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 09:59:59.642747   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:00:02.204444   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:00:07.326773   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:00:17.567767   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:00:18.907180   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:00:38.049906   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:00:46.089974   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:00:53.202240   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:01:12.392173   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:01:18.073394   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:01:19.011754   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/default-k8s-diff-port-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:01:30.738363   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:01:46.015547   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:02:16.003113   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0330 10:02:16.655102   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (410.523574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-331000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-331000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-331000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.836µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-331000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
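The failure above comes down to the helper polling the kubernetes-dashboard namespace for a pod carrying the k8s-app=kubernetes-dashboard label and never seeing one become Running before the 9m0s deadline; every poll surfaced the client rate limiter / context-deadline warning instead. Purely as a sketch (this is not the actual helpers_test.go code, and it assumes client-go with the kubeconfig's current context pointing at the cluster under test), that kind of label-selector wait looks roughly like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the current kubeconfig context is the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 3s for up to 9m, mirroring the 9m0s timeout reported above.
	err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// Transient list errors (like the rate-limiter warnings above) just retry.
			fmt.Println("WARNING: pod list returned:", err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	fmt.Println("wait result:", err) // nil on success, "timed out waiting for the condition" on timeout
}

The same check can be reproduced by hand with: kubectl --context old-k8s-version-331000 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard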
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-331000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-331000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a",
	        "Created": "2023-03-30T16:29:25.810047609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 683182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-03-30T16:35:23.982718937Z",
	            "FinishedAt": "2023-03-30T16:35:20.868468275Z"
	        },
	        "Image": "sha256:9d2236b717ccec479afd77862e0eef2affb7c3e4fe7eecdc0546bff7b370db25",
	        "ResolvConfPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hostname",
	        "HostsPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/hosts",
	        "LogPath": "/var/lib/docker/containers/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a/521ae416df386b804a8c63fde7d0580ea83db7526870fcff106c7ff7ac19c55a-json.log",
	        "Name": "/old-k8s-version-331000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-331000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-331000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29-init/diff:/var/lib/docker/overlay2/9fbf3b44ff597f1722ec4c360ffdfcae800e09c24268f5f936302ffdfee42d32/diff:/var/lib/docker/overlay2/2fa58e45c276e0be77bbac970263b15f9ad2001b415cd256f2ba0fa98573e87f/diff:/var/lib/docker/overlay2/13592448013aa7997afb90ea8f709060aa47b73f59c20b387deeca946f5e5266/diff:/var/lib/docker/overlay2/9c9eab98a913029a905ee9043d4ea3ee591c2a3aeae75270696cf2f4b02139ff/diff:/var/lib/docker/overlay2/8369b3f99543b36671d6b1cad98dc7941e80cfbb758b02cee4ada4b05bcb4f0b/diff:/var/lib/docker/overlay2/c51ae82d21d70c3b9ce7715bc6a6a55190189c8010c12659f5e6b4fc071fcf51/diff:/var/lib/docker/overlay2/1cfa501ade9e79b858d1178588e001d5cdbedf2827ba12c92bc46c08ea1d1930/diff:/var/lib/docker/overlay2/73958304c7245dc492f266b09a7ed783314d048ca147fa168947fb98ba88124e/diff:/var/lib/docker/overlay2/a86028ac7e2c414af738e790ed658ffe4d751344c6dc27966e786ec7cf7703c1/diff:/var/lib/docker/overlay2/8421b4
006e78f7e0ae1d1064a5f50418040f650718ed63b5049cac7acda8429b/diff:/var/lib/docker/overlay2/f1aa735ff48a6070c6673d0397f3d7e592ed5d0dd6b2bf65bb9f45567d8e76b1/diff:/var/lib/docker/overlay2/7f60399d9dd327abdaefda924e875ea54223eb844b6c5c6ace4792e3fc0498e1/diff:/var/lib/docker/overlay2/e569b0d3e8b9b5e71f31ee227f7b8d83da14355d5b880c0f7e477ecf18e360dd/diff:/var/lib/docker/overlay2/7a0fdb748888accee02d68f26ba7c8f11ed7e0b1727beb633a8a02d41c762b9a/diff:/var/lib/docker/overlay2/90bb359231d0667c6a684304863352970e3854a24adc01da7d96250078f50593/diff:/var/lib/docker/overlay2/14807dd345c571446606fc12542343343734f09216ac0f72ea971e44c51e73bc/diff:/var/lib/docker/overlay2/47675c305fd3157c7d1f94df59badbf171128c371f27df3e460e7616fdebd1a3/diff:/var/lib/docker/overlay2/62eb07949377c89c1ed888624c8d9a730d01e2eb1dd06ff23fc64e28e0cbbc8c/diff:/var/lib/docker/overlay2/a4cd0c6fe63f05907e27cba37dcb62ac7283f4f6f3b3fbfda934216663875735/diff:/var/lib/docker/overlay2/a087ac93bbe33796bd250ecb00de5612af3696f1968d30c9b2390178f17f203b/diff:/var/lib/d
ocker/overlay2/56c471e8e71e11a4ea8a6a241a2406101bce3170edbec4f49d26e9d56894affa/diff:/var/lib/docker/overlay2/7bd516e66d75f51ad75387f0e86ddb6cd9e975fd71c57c5b2dcddf88ad78acd7/diff:/var/lib/docker/overlay2/20fca7abbce0276665d7149fd104aaff2fbce77f78ab0b4ced10bca46257d41c/diff:/var/lib/docker/overlay2/5e10b3d425507f2bf83efc712253364a52c3efa222e08dc12d7ed04e6390fbf8/diff:/var/lib/docker/overlay2/e8efccb1871a01c5184fe7925e831ce2fd83d4f03065027be2901b92ab90a258/diff:/var/lib/docker/overlay2/97780e9d41ac53335964c277f20f6ec8a960ee705fbeb461d6c712bde94de436/diff:/var/lib/docker/overlay2/f72e5826af1361739ac8503fbb88bf39f8efc9ffbb41af7012e3aa815236ed48/diff:/var/lib/docker/overlay2/a40d3ff72764915cfa40dfd926b8f13684764d6c58dd0a3ad4301584bb0998f3/diff:/var/lib/docker/overlay2/5730e50e96fd55a5c5fa6a2d4ebd253a91e71d32989764a774722d2703760ef0/diff:/var/lib/docker/overlay2/b9c7c3ad8aee072ae5cb7e36227bb14403f096a48ad1140bd0c499590859c6cc/diff:/var/lib/docker/overlay2/e66abe381c2911f74ec47698d1bdecfd39268170629b92fd84b37eb2ce7
a5ba6/diff:/var/lib/docker/overlay2/808d85518d0d508262fe102c8c8ada39430d040fe587d8f352d242a7cb41c3d6/diff:/var/lib/docker/overlay2/415facd46666ec88fe94221167e53a0e81eb9d72fef6326abf1a5a62bd28f6a0/diff:/var/lib/docker/overlay2/4d191b6478677781e2f1e74ac65d7957f5b20b908733d882a049a62c2d8f6728/diff:/var/lib/docker/overlay2/76d1a08b1a9c336558d2ef846ee06a10bc3513ce602ecf525e72588e05cee440/diff:/var/lib/docker/overlay2/917a1b8e5732828fa00857e6bc4b53263c418e42154740f628e7ba65ee6296a3/diff:/var/lib/docker/overlay2/80c17b561fc7c2c0512663a0616a853c94e8ca142c247f82e6b22e957d2853ba/diff:/var/lib/docker/overlay2/9b2baf8bcd946ef6986df8016336a5290e46a343ba3a203b45b5eaccf2d19bdf/diff:/var/lib/docker/overlay2/32b8331b04aa0b9282abe09a864255a764f85400b86fb4dbe23f9494e0c3a93c/diff:/var/lib/docker/overlay2/18634eb2fa4552ab5e8a5683b2d1d519cf3ca8fb3d99d345219c89f4d93494b3/diff:/var/lib/docker/overlay2/5bbe864c28792b9309fdb4840452e1a20c13632cf0560091a4204b8ad0c246d3/diff:/var/lib/docker/overlay2/c5589da650ee6241c9902fbf401eaf68da7b60
ab69831f2e9c321d744113f417/diff",
	                "MergedDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/merged",
	                "UpperDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/diff",
	                "WorkDir": "/var/lib/docker/overlay2/07526d52e2e1daf22fd7ffad3c7b505b9063cf2864df54dcbbafbdfe1d92fc29/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-331000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-331000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-331000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-331000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6790ea0f276be9c604217c0826bb2493527579753993635659c34a69f43b6b3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59135"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59136"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59138"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59139"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e6790ea0f276",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-331000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "521ae416df38",
	                        "old-k8s-version-331000"
	                    ],
	                    "NetworkID": "6fa3318f5503a289e050622f1a9f4b0bea1af814ad18d1b409b3b8c51f5769ef",
	                    "EndpointID": "d3e12c3eabea1d71c79fbe06fe901d0e28f2e0e0f2e8ff4418b5e3ffe4c96e09",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
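Most of the docker inspect dump above is boilerplate for this failure; the fields that matter are State.Status ("running"), RestartCount (0), and the host port published for 8443/tcp (127.0.0.1:59139, the apiserver endpoint minikube talks to). The harness gets at these by shelling out to docker inspect, as the Run line above shows; purely as an illustration, the same lookup via the Docker Engine Go SDK might look like this:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Illustration only; the test harness shells out to `docker inspect` instead.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-331000")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", info.State.Status)   // "running" in the dump above
	fmt.Println("restarts:", info.RestartCount) // 0
	// Host port published for the apiserver port (8443/tcp) inside the container.
	if bindings, ok := info.NetworkSettings.Ports["8443/tcp"]; ok && len(bindings) > 0 {
		fmt.Printf("apiserver endpoint: %s:%s\n", bindings[0].HostIP, bindings[0].HostPort) // 127.0.0.1:59139
	}
}

An equivalent one-liner for just the state is: docker inspect -f '{{.State.Status}}' old-k8s-version-331000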
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (398.401279ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-331000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-331000 logs -n 25: (3.383572826s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-995000                                | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-995000                                | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-995000                                | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	| delete  | -p embed-certs-995000                                | embed-certs-995000           | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	| delete  | -p                                                   | disable-driver-mounts-908000 | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | disable-driver-mounts-908000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:44 PDT | 30 Mar 23 09:44 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT | 30 Mar 23 09:45 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT | 30 Mar 23 09:45 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-582000     | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT | 30 Mar 23 09:45 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:45 PDT | 30 Mar 23 09:55 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.3                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:55 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:55 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:55 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:55 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-582000 | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:55 PDT |
	|         | default-k8s-diff-port-582000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-996000 --memory=2200 --alsologtostderr | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:55 PDT |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.0-rc.0   |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-996000           | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:55 PDT |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-996000                                 | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:55 PDT | 30 Mar 23 09:56 PDT |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-996000                | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:56 PDT | 30 Mar 23 09:56 PDT |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-996000 --memory=2200 --alsologtostderr | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:56 PDT | 30 Mar 23 09:56 PDT |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.0-rc.0   |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-996000 sudo                            | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:56 PDT | 30 Mar 23 09:56 PDT |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-996000                                 | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:56 PDT | 30 Mar 23 09:56 PDT |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-996000                                 | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:56 PDT | 30 Mar 23 09:56 PDT |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-996000                                 | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:56 PDT | 30 Mar 23 09:56 PDT |
	| delete  | -p newest-cni-996000                                 | newest-cni-996000            | jenkins | v1.29.0 | 30 Mar 23 09:56 PDT | 30 Mar 23 09:56 PDT |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 09:56:09
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 09:56:09.136480   46383 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:56:09.136632   46383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:56:09.136638   46383 out.go:309] Setting ErrFile to fd 2...
	I0330 09:56:09.136642   46383 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:56:09.136768   46383 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:56:09.138169   46383 out.go:303] Setting JSON to false
	I0330 09:56:09.158360   46383 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10537,"bootTime":1680184832,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 09:56:09.158450   46383 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 09:56:09.179977   46383 out.go:177] * [newest-cni-996000] minikube v1.29.0 on Darwin 13.3
	I0330 09:56:09.221964   46383 notify.go:220] Checking for updates...
	I0330 09:56:09.242996   46383 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 09:56:09.264097   46383 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:56:09.285245   46383 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 09:56:09.306251   46383 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 09:56:09.327290   46383 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 09:56:09.348412   46383 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 09:56:09.370901   46383 config.go:182] Loaded profile config "newest-cni-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0330 09:56:09.371674   46383 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 09:56:09.436315   46383 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 09:56:09.436453   46383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:56:09.623942   46383 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:56:09.489582668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:56:09.667523   46383 out.go:177] * Using the docker driver based on existing profile
	I0330 09:56:09.688619   46383 start.go:295] selected driver: docker
	I0330 09:56:09.688725   46383 start.go:859] validating driver "docker" against &{Name:newest-cni-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:56:09.688885   46383 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 09:56:09.692922   46383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 09:56:09.882213   46383 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2023-03-30 16:56:09.745708686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 09:56:09.882361   46383 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0330 09:56:09.882379   46383 cni.go:84] Creating CNI manager for ""
	I0330 09:56:09.882392   46383 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:56:09.882402   46383 start_flags.go:319] config:
	{Name:newest-cni-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:56:09.904387   46383 out.go:177] * Starting control plane node newest-cni-996000 in cluster newest-cni-996000
	I0330 09:56:09.925909   46383 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 09:56:09.946785   46383 out.go:177] * Pulling base image ...
	I0330 09:56:09.988807   46383 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0330 09:56:09.988835   46383 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 09:56:09.988889   46383 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0330 09:56:09.988902   46383 cache.go:57] Caching tarball of preloaded images
	I0330 09:56:09.989036   46383 preload.go:174] Found /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0330 09:56:09.989047   46383 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.0-rc.0 on docker
	I0330 09:56:09.989566   46383 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/config.json ...
	I0330 09:56:10.050592   46383 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon, skipping pull
	I0330 09:56:10.050617   46383 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in daemon, skipping load
	I0330 09:56:10.050639   46383 cache.go:193] Successfully downloaded all kic artifacts
	I0330 09:56:10.050687   46383 start.go:364] acquiring machines lock for newest-cni-996000: {Name:mk93415f2650fd5783247b4bf524aa4b9ecc3fe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0330 09:56:10.050772   46383 start.go:368] acquired machines lock for "newest-cni-996000" in 67.185µs
	I0330 09:56:10.050798   46383 start.go:96] Skipping create...Using existing machine configuration
	I0330 09:56:10.050807   46383 fix.go:55] fixHost starting: 
	I0330 09:56:10.051037   46383 cli_runner.go:164] Run: docker container inspect newest-cni-996000 --format={{.State.Status}}
	I0330 09:56:10.110749   46383 fix.go:103] recreateIfNeeded on newest-cni-996000: state=Stopped err=<nil>
	W0330 09:56:10.110777   46383 fix.go:129] unexpected machine state, will restart: <nil>
	I0330 09:56:10.132679   46383 out.go:177] * Restarting existing docker container for "newest-cni-996000" ...
	I0330 09:56:10.154382   46383 cli_runner.go:164] Run: docker start newest-cni-996000
	I0330 09:56:10.507409   46383 cli_runner.go:164] Run: docker container inspect newest-cni-996000 --format={{.State.Status}}
	I0330 09:56:10.571442   46383 kic.go:426] container "newest-cni-996000" state is running.
	I0330 09:56:10.572078   46383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-996000
	I0330 09:56:10.639841   46383 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/config.json ...
	I0330 09:56:10.640353   46383 machine.go:88] provisioning docker machine ...
	I0330 09:56:10.640388   46383 ubuntu.go:169] provisioning hostname "newest-cni-996000"
	I0330 09:56:10.640488   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:10.717029   46383 main.go:141] libmachine: Using SSH client type: native
	I0330 09:56:10.717564   46383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 60259 <nil> <nil>}
	I0330 09:56:10.717578   46383 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-996000 && echo "newest-cni-996000" | sudo tee /etc/hostname
	I0330 09:56:10.861502   46383 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-996000
	
	I0330 09:56:10.861607   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:10.926780   46383 main.go:141] libmachine: Using SSH client type: native
	I0330 09:56:10.927132   46383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 60259 <nil> <nil>}
	I0330 09:56:10.927145   46383 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-996000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-996000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-996000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0330 09:56:11.046706   46383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:56:11.046736   46383 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16199-24978/.minikube CaCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16199-24978/.minikube}
	I0330 09:56:11.046758   46383 ubuntu.go:177] setting up certificates
	I0330 09:56:11.046766   46383 provision.go:83] configureAuth start
	I0330 09:56:11.046845   46383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-996000
	I0330 09:56:11.110915   46383 provision.go:138] copyHostCerts
	I0330 09:56:11.111004   46383 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem, removing ...
	I0330 09:56:11.111015   46383 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem
	I0330 09:56:11.111118   46383 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.pem (1078 bytes)
	I0330 09:56:11.111314   46383 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem, removing ...
	I0330 09:56:11.111322   46383 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem
	I0330 09:56:11.111381   46383 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/cert.pem (1123 bytes)
	I0330 09:56:11.111518   46383 exec_runner.go:144] found /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem, removing ...
	I0330 09:56:11.111524   46383 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem
	I0330 09:56:11.111591   46383 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16199-24978/.minikube/key.pem (1675 bytes)
	I0330 09:56:11.111701   46383 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem org=jenkins.newest-cni-996000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-996000]
	I0330 09:56:11.173920   46383 provision.go:172] copyRemoteCerts
	I0330 09:56:11.173998   46383 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0330 09:56:11.174060   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:11.235909   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:11.322770   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0330 09:56:11.340338   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0330 09:56:11.357872   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0330 09:56:11.375354   46383 provision.go:86] duration metric: configureAuth took 328.568799ms
	I0330 09:56:11.375378   46383 ubuntu.go:193] setting minikube options for container-runtime
	I0330 09:56:11.375519   46383 config.go:182] Loaded profile config "newest-cni-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0330 09:56:11.375585   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:11.435871   46383 main.go:141] libmachine: Using SSH client type: native
	I0330 09:56:11.436225   46383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 60259 <nil> <nil>}
	I0330 09:56:11.436235   46383 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0330 09:56:11.556112   46383 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0330 09:56:11.556200   46383 ubuntu.go:71] root file system type: overlay
	I0330 09:56:11.556353   46383 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0330 09:56:11.556505   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:11.617471   46383 main.go:141] libmachine: Using SSH client type: native
	I0330 09:56:11.617821   46383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 60259 <nil> <nil>}
	I0330 09:56:11.617870   46383 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0330 09:56:11.742592   46383 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0330 09:56:11.742691   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:11.811380   46383 main.go:141] libmachine: Using SSH client type: native
	I0330 09:56:11.811730   46383 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140cc00] 0x140fca0 <nil>  [] 0s} 127.0.0.1 60259 <nil> <nil>}
	I0330 09:56:11.811743   46383 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0330 09:56:11.934309   46383 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0330 09:56:11.934328   46383 machine.go:91] provisioned docker machine in 1.293934115s
	I0330 09:56:11.934338   46383 start.go:300] post-start starting for "newest-cni-996000" (driver="docker")
	I0330 09:56:11.934346   46383 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0330 09:56:11.934427   46383 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0330 09:56:11.934482   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:11.995456   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:12.083150   46383 ssh_runner.go:195] Run: cat /etc/os-release
	I0330 09:56:12.086976   46383 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0330 09:56:12.086992   46383 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0330 09:56:12.086998   46383 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0330 09:56:12.087003   46383 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0330 09:56:12.087011   46383 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/addons for local assets ...
	I0330 09:56:12.087100   46383 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16199-24978/.minikube/files for local assets ...
	I0330 09:56:12.087252   46383 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem -> 254482.pem in /etc/ssl/certs
	I0330 09:56:12.087418   46383 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0330 09:56:12.094956   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:56:12.112108   46383 start.go:303] post-start completed in 177.751221ms
	I0330 09:56:12.112192   46383 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:56:12.112255   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:12.172249   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:12.254471   46383 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0330 09:56:12.259604   46383 fix.go:57] fixHost completed within 2.208745561s
	I0330 09:56:12.259619   46383 start.go:83] releasing machines lock for "newest-cni-996000", held for 2.208789444s
	I0330 09:56:12.259717   46383 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-996000
	I0330 09:56:12.320410   46383 ssh_runner.go:195] Run: cat /version.json
	I0330 09:56:12.320419   46383 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0330 09:56:12.320490   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:12.320501   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:12.386765   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:12.386830   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:12.521020   46383 ssh_runner.go:195] Run: systemctl --version
	I0330 09:56:12.526226   46383 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0330 09:56:12.531455   46383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0330 09:56:12.547115   46383 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0330 09:56:12.547181   46383 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0330 09:56:12.555021   46383 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0330 09:56:12.555033   46383 start.go:481] detecting cgroup driver to use...
	I0330 09:56:12.555044   46383 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:56:12.555117   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:56:12.568421   46383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0330 09:56:12.576931   46383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0330 09:56:12.585440   46383 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0330 09:56:12.585499   46383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0330 09:56:12.594177   46383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:56:12.602838   46383 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0330 09:56:12.611422   46383 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0330 09:56:12.620005   46383 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0330 09:56:12.627826   46383 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0330 09:56:12.636411   46383 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0330 09:56:12.643814   46383 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0330 09:56:12.651016   46383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:56:12.722831   46383 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0330 09:56:12.796153   46383 start.go:481] detecting cgroup driver to use...
	I0330 09:56:12.796173   46383 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0330 09:56:12.796240   46383 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0330 09:56:12.807028   46383 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0330 09:56:12.807097   46383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0330 09:56:12.817613   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0330 09:56:12.831993   46383 ssh_runner.go:195] Run: which cri-dockerd
	I0330 09:56:12.836344   46383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0330 09:56:12.844997   46383 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0330 09:56:12.860333   46383 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0330 09:56:12.957331   46383 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0330 09:56:13.045049   46383 docker.go:538] configuring docker to use "cgroupfs" as cgroup driver...
	I0330 09:56:13.045070   46383 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0330 09:56:13.058846   46383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:56:13.148927   46383 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0330 09:56:13.418294   46383 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:56:13.485031   46383 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0330 09:56:13.557941   46383 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0330 09:56:13.628568   46383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:56:13.684938   46383 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0330 09:56:13.718532   46383 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0330 09:56:13.788317   46383 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0330 09:56:13.873826   46383 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0330 09:56:13.873937   46383 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0330 09:56:13.878704   46383 start.go:549] Will wait 60s for crictl version
	I0330 09:56:13.878765   46383 ssh_runner.go:195] Run: which crictl
	I0330 09:56:13.882793   46383 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0330 09:56:13.915374   46383 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0330 09:56:13.915457   46383 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:56:13.941280   46383 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0330 09:56:14.010838   46383 out.go:204] * Preparing Kubernetes v1.27.0-rc.0 on Docker 23.0.1 ...
	I0330 09:56:14.011071   46383 cli_runner.go:164] Run: docker exec -t newest-cni-996000 dig +short host.docker.internal
	I0330 09:56:14.160038   46383 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0330 09:56:14.160212   46383 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0330 09:56:14.165041   46383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:56:14.175205   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:14.257581   46383 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0330 09:56:14.279476   46383 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0330 09:56:14.279636   46383 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:56:14.302418   46383 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0330 09:56:14.302441   46383 docker.go:569] Images already preloaded, skipping extraction
	I0330 09:56:14.302521   46383 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0330 09:56:14.322496   46383 docker.go:639] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.0-rc.0
	registry.k8s.io/kube-proxy:v1.27.0-rc.0
	registry.k8s.io/kube-controller-manager:v1.27.0-rc.0
	registry.k8s.io/kube-scheduler:v1.27.0-rc.0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0330 09:56:14.322516   46383 cache_images.go:84] Images are preloaded, skipping loading
	I0330 09:56:14.322599   46383 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0330 09:56:14.348997   46383 cni.go:84] Creating CNI manager for ""
	I0330 09:56:14.349019   46383 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:56:14.349041   46383 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0330 09:56:14.349061   46383 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-996000 NodeName:newest-cni-996000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0330 09:56:14.349222   46383 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-996000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0330 09:56:14.349300   46383 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-996000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0330 09:56:14.349361   46383 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.0-rc.0
	I0330 09:56:14.357862   46383 binaries.go:44] Found k8s binaries, skipping transfer
	I0330 09:56:14.357922   46383 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0330 09:56:14.365659   46383 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (420 bytes)
	I0330 09:56:14.378586   46383 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0330 09:56:14.391872   46383 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0330 09:56:14.405024   46383 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0330 09:56:14.408929   46383 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0330 09:56:14.418948   46383 certs.go:56] Setting up /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000 for IP: 192.168.67.2
	I0330 09:56:14.418966   46383 certs.go:186] acquiring lock for shared ca certs: {Name:mk7bf1a10342abaa451a7b833b2ba3f85c81aeda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:56:14.419134   46383 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key
	I0330 09:56:14.419188   46383 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key
	I0330 09:56:14.419273   46383 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/client.key
	I0330 09:56:14.419332   46383 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/apiserver.key.c7fa3a9e
	I0330 09:56:14.419386   46383 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/proxy-client.key
	I0330 09:56:14.419590   46383 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem (1338 bytes)
	W0330 09:56:14.419629   46383 certs.go:397] ignoring /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448_empty.pem, impossibly tiny 0 bytes
	I0330 09:56:14.419640   46383 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca-key.pem (1679 bytes)
	I0330 09:56:14.419675   46383 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/ca.pem (1078 bytes)
	I0330 09:56:14.419712   46383 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/cert.pem (1123 bytes)
	I0330 09:56:14.419762   46383 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/certs/key.pem (1675 bytes)
	I0330 09:56:14.419838   46383 certs.go:401] found cert: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem (1708 bytes)
	I0330 09:56:14.420376   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0330 09:56:14.438632   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0330 09:56:14.456615   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0330 09:56:14.475080   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/newest-cni-996000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0330 09:56:14.493847   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0330 09:56:14.511330   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0330 09:56:14.529566   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0330 09:56:14.547380   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0330 09:56:14.564916   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0330 09:56:14.582594   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/certs/25448.pem --> /usr/share/ca-certificates/25448.pem (1338 bytes)
	I0330 09:56:14.600252   46383 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/ssl/certs/254482.pem --> /usr/share/ca-certificates/254482.pem (1708 bytes)
	I0330 09:56:14.617916   46383 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0330 09:56:14.631011   46383 ssh_runner.go:195] Run: openssl version
	I0330 09:56:14.636620   46383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/254482.pem && ln -fs /usr/share/ca-certificates/254482.pem /etc/ssl/certs/254482.pem"
	I0330 09:56:14.645107   46383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/254482.pem
	I0330 09:56:14.649049   46383 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 30 15:43 /usr/share/ca-certificates/254482.pem
	I0330 09:56:14.649097   46383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/254482.pem
	I0330 09:56:14.654532   46383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/254482.pem /etc/ssl/certs/3ec20f2e.0"
	I0330 09:56:14.662369   46383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0330 09:56:14.670659   46383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:56:14.674702   46383 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 30 15:39 /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:56:14.674744   46383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0330 09:56:14.680081   46383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0330 09:56:14.687564   46383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25448.pem && ln -fs /usr/share/ca-certificates/25448.pem /etc/ssl/certs/25448.pem"
	I0330 09:56:14.695674   46383 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25448.pem
	I0330 09:56:14.699910   46383 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 30 15:43 /usr/share/ca-certificates/25448.pem
	I0330 09:56:14.699953   46383 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25448.pem
	I0330 09:56:14.705624   46383 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25448.pem /etc/ssl/certs/51391683.0"
	I0330 09:56:14.713209   46383 kubeadm.go:401] StartCluster: {Name:newest-cni-996000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:newest-cni-996000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 09:56:14.713343   46383 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:56:14.733717   46383 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0330 09:56:14.741838   46383 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0330 09:56:14.741852   46383 kubeadm.go:633] restartCluster start
	I0330 09:56:14.741900   46383 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0330 09:56:14.749221   46383 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:14.749286   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:14.809768   46383 kubeconfig.go:135] verify returned: extract IP: "newest-cni-996000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:56:14.809926   46383 kubeconfig.go:146] "newest-cni-996000" context is missing from /Users/jenkins/minikube-integration/16199-24978/kubeconfig - will repair!
	I0330 09:56:14.810276   46383 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:56:14.811808   46383 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0330 09:56:14.819903   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:14.819960   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:14.828665   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:15.330783   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:15.331004   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:15.342424   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:15.829312   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:15.829443   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:15.840852   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:16.329330   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:16.329512   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:16.340752   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:16.829288   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:16.829420   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:16.840679   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:17.330126   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:17.330275   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:17.340629   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:17.829675   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:17.829837   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:17.841087   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:18.330888   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:18.331079   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:18.342187   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:18.830883   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:18.831033   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:18.842372   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:19.329252   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:19.329430   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:19.340939   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:19.830860   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:19.831004   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:19.842287   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:20.330893   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:20.331085   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:20.342475   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:20.829994   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:20.830181   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:20.841320   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:21.330626   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:21.330742   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:21.341913   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:21.830258   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:21.830420   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:21.841778   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:22.329368   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:22.329523   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:22.340754   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:22.829872   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:22.830034   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:22.841275   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:23.331021   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:23.331183   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:23.341107   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:23.829038   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:23.829180   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:23.839283   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:24.329820   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:24.329981   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:24.340638   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:24.831009   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:24.831161   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:24.842381   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:24.842395   46383 api_server.go:165] Checking apiserver status ...
	I0330 09:56:24.842458   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0330 09:56:24.851280   46383 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:24.851292   46383 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0330 09:56:24.851300   46383 kubeadm.go:1120] stopping kube-system containers ...
	I0330 09:56:24.851366   46383 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0330 09:56:24.873868   46383 docker.go:465] Stopping containers: [3e7dbbdafe5b beaba07b41fd 9dfc614c912c a977a0de8f2a 21d2abfd46c2 0fe4874317ec 415ab9a2bb30 17104eec6647 56253dab9a99 16af1a12ac45 3142cb280de1 aecb28e25b7c 18d3bddf9331 74ebcb04e4b0 2ef757b38f14 a0cda803e38f 42a0c1e1e231]
	I0330 09:56:24.873961   46383 ssh_runner.go:195] Run: docker stop 3e7dbbdafe5b beaba07b41fd 9dfc614c912c a977a0de8f2a 21d2abfd46c2 0fe4874317ec 415ab9a2bb30 17104eec6647 56253dab9a99 16af1a12ac45 3142cb280de1 aecb28e25b7c 18d3bddf9331 74ebcb04e4b0 2ef757b38f14 a0cda803e38f 42a0c1e1e231
	I0330 09:56:24.894444   46383 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0330 09:56:24.905062   46383 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0330 09:56:24.912902   46383 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Mar 30 16:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Mar 30 16:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar 30 16:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Mar 30 16:55 /etc/kubernetes/scheduler.conf
	
	I0330 09:56:24.912956   46383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0330 09:56:24.920379   46383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0330 09:56:24.927887   46383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0330 09:56:24.935771   46383 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:24.935836   46383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0330 09:56:24.943398   46383 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0330 09:56:24.951323   46383 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:56:24.951387   46383 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0330 09:56:24.959183   46383 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0330 09:56:24.967508   46383 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0330 09:56:24.967524   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:56:25.017372   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:56:25.576663   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:56:25.693608   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:56:25.750970   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:56:25.838399   46383 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:56:25.838483   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:56:26.349379   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:56:26.848676   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:56:26.860983   46383 api_server.go:71] duration metric: took 1.022564414s to wait for apiserver process to appear ...
	I0330 09:56:26.860999   46383 api_server.go:87] waiting for apiserver healthz status ...
	I0330 09:56:26.861013   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:26.862201   46383 api_server.go:268] stopped: https://127.0.0.1:60258/healthz: Get "https://127.0.0.1:60258/healthz": EOF
	I0330 09:56:27.363127   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:29.356846   46383 api_server.go:278] https://127.0.0.1:60258/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0330 09:56:29.356873   46383 api_server.go:102] status: https://127.0.0.1:60258/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0330 09:56:29.362369   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:29.371620   46383 api_server.go:278] https://127.0.0.1:60258/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0330 09:56:29.371637   46383 api_server.go:102] status: https://127.0.0.1:60258/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0330 09:56:29.862433   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:29.868623   46383 api_server.go:278] https://127.0.0.1:60258/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0330 09:56:29.868648   46383 api_server.go:102] status: https://127.0.0.1:60258/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:56:30.362394   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:30.369023   46383 api_server.go:278] https://127.0.0.1:60258/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0330 09:56:30.369053   46383 api_server.go:102] status: https://127.0.0.1:60258/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:56:30.862435   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:30.869187   46383 api_server.go:278] https://127.0.0.1:60258/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0330 09:56:30.869222   46383 api_server.go:102] status: https://127.0.0.1:60258/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0330 09:56:31.362726   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:31.369240   46383 api_server.go:278] https://127.0.0.1:60258/healthz returned 200:
	ok
	I0330 09:56:31.376523   46383 api_server.go:140] control plane version: v1.27.0-rc.0
	I0330 09:56:31.376534   46383 api_server.go:130] duration metric: took 4.515428727s to wait for apiserver health ...
	I0330 09:56:31.376542   46383 cni.go:84] Creating CNI manager for ""
	I0330 09:56:31.376551   46383 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 09:56:31.401290   46383 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0330 09:56:31.423279   46383 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0330 09:56:31.433333   46383 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0330 09:56:31.447072   46383 system_pods.go:43] waiting for kube-system pods to appear ...
	I0330 09:56:31.454899   46383 system_pods.go:59] 8 kube-system pods found
	I0330 09:56:31.454918   46383 system_pods.go:61] "coredns-5d78c9869d-g2s5p" [453114c1-b206-40c2-8be9-8e23e472aead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0330 09:56:31.454926   46383 system_pods.go:61] "etcd-newest-cni-996000" [4f5439fb-285f-4be2-b690-3e1d0a7f83e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0330 09:56:31.454932   46383 system_pods.go:61] "kube-apiserver-newest-cni-996000" [bcd852c4-597b-494f-a086-d3d7605b4e25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0330 09:56:31.454937   46383 system_pods.go:61] "kube-controller-manager-newest-cni-996000" [b1066665-08d0-44dc-8e2e-683d2457443b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0330 09:56:31.454942   46383 system_pods.go:61] "kube-proxy-sj62f" [0257c3d1-99b9-43e6-86cb-ee5d1e390f96] Running
	I0330 09:56:31.454947   46383 system_pods.go:61] "kube-scheduler-newest-cni-996000" [c554f01b-850e-4c45-87fa-19246f8edd3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0330 09:56:31.454956   46383 system_pods.go:61] "metrics-server-544b559666-68cpp" [d4720486-f499-42b1-939e-654f529d8348] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0330 09:56:31.454961   46383 system_pods.go:61] "storage-provisioner" [40374a44-8de5-4bf8-af54-dd826d5c439c] Running
	I0330 09:56:31.454965   46383 system_pods.go:74] duration metric: took 7.880469ms to wait for pod list to return data ...
	I0330 09:56:31.454971   46383 node_conditions.go:102] verifying NodePressure condition ...
	I0330 09:56:31.457819   46383 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0330 09:56:31.457834   46383 node_conditions.go:123] node cpu capacity is 6
	I0330 09:56:31.457844   46383 node_conditions.go:105] duration metric: took 2.869957ms to run NodePressure ...
	I0330 09:56:31.457859   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.0-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0330 09:56:31.615164   46383 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0330 09:56:31.641744   46383 ops.go:34] apiserver oom_adj: -16
	I0330 09:56:31.641759   46383 kubeadm.go:637] restartCluster took 16.899518788s
	I0330 09:56:31.641764   46383 kubeadm.go:403] StartCluster complete in 16.928181081s
	I0330 09:56:31.641779   46383 settings.go:142] acquiring lock: {Name:mkee06510b0682aea765fc9cbf62cdda0355bccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:56:31.641860   46383 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 09:56:31.642461   46383 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/kubeconfig: {Name:mkc78ec33a5f0be01e56d3b4ff748b88bdaf2a65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 09:56:31.642716   46383 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0330 09:56:31.642725   46383 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0330 09:56:31.642795   46383 addons.go:66] Setting storage-provisioner=true in profile "newest-cni-996000"
	I0330 09:56:31.642799   46383 addons.go:66] Setting default-storageclass=true in profile "newest-cni-996000"
	I0330 09:56:31.642810   46383 addons.go:228] Setting addon storage-provisioner=true in "newest-cni-996000"
	W0330 09:56:31.642818   46383 addons.go:237] addon storage-provisioner should already be in state true
	I0330 09:56:31.642833   46383 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-996000"
	I0330 09:56:31.642846   46383 addons.go:66] Setting dashboard=true in profile "newest-cni-996000"
	I0330 09:56:31.642867   46383 host.go:66] Checking if "newest-cni-996000" exists ...
	I0330 09:56:31.642881   46383 config.go:182] Loaded profile config "newest-cni-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.0-rc.0
	I0330 09:56:31.643292   46383 addons.go:228] Setting addon dashboard=true in "newest-cni-996000"
	W0330 09:56:31.643344   46383 addons.go:237] addon dashboard should already be in state true
	I0330 09:56:31.643336   46383 addons.go:66] Setting metrics-server=true in profile "newest-cni-996000"
	I0330 09:56:31.643363   46383 addons.go:228] Setting addon metrics-server=true in "newest-cni-996000"
	W0330 09:56:31.643374   46383 addons.go:237] addon metrics-server should already be in state true
	I0330 09:56:31.643424   46383 host.go:66] Checking if "newest-cni-996000" exists ...
	I0330 09:56:31.643432   46383 host.go:66] Checking if "newest-cni-996000" exists ...
	I0330 09:56:31.643913   46383 cli_runner.go:164] Run: docker container inspect newest-cni-996000 --format={{.State.Status}}
	I0330 09:56:31.644055   46383 cli_runner.go:164] Run: docker container inspect newest-cni-996000 --format={{.State.Status}}
	I0330 09:56:31.644089   46383 cli_runner.go:164] Run: docker container inspect newest-cni-996000 --format={{.State.Status}}
	I0330 09:56:31.644124   46383 cli_runner.go:164] Run: docker container inspect newest-cni-996000 --format={{.State.Status}}
	I0330 09:56:31.655401   46383 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-996000" context rescaled to 1 replicas
	I0330 09:56:31.655476   46383 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0330 09:56:31.692634   46383 out.go:177] * Verifying Kubernetes components...
	I0330 09:56:31.715545   46383 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:56:31.746067   46383 addons.go:228] Setting addon default-storageclass=true in "newest-cni-996000"
	I0330 09:56:31.779477   46383 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0330 09:56:31.779487   46383 addons.go:237] addon default-storageclass should already be in state true
	I0330 09:56:31.749383   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:31.758549   46383 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0330 09:56:31.800406   46383 host.go:66] Checking if "newest-cni-996000" exists ...
	I0330 09:56:31.821784   46383 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0330 09:56:31.749294   46383 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0330 09:56:31.800456   46383 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:56:31.801374   46383 cli_runner.go:164] Run: docker container inspect newest-cni-996000 --format={{.State.Status}}
	I0330 09:56:31.821847   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0330 09:56:31.842616   46383 addons.go:420] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0330 09:56:31.863384   46383 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0330 09:56:31.842684   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:31.863425   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0330 09:56:31.884625   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0330 09:56:31.884637   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0330 09:56:31.884668   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:31.884702   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:31.901695   46383 api_server.go:51] waiting for apiserver process to appear ...
	I0330 09:56:31.901822   46383 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:56:31.902968   46383 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0330 09:56:31.902983   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0330 09:56:31.903098   46383 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-996000
	I0330 09:56:31.915809   46383 api_server.go:71] duration metric: took 260.292318ms to wait for apiserver process to appear ...
	I0330 09:56:31.915837   46383 api_server.go:87] waiting for apiserver healthz status ...
	I0330 09:56:31.915851   46383 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60258/healthz ...
	I0330 09:56:31.925316   46383 api_server.go:278] https://127.0.0.1:60258/healthz returned 200:
	ok
	I0330 09:56:31.927559   46383 api_server.go:140] control plane version: v1.27.0-rc.0
	I0330 09:56:31.927572   46383 api_server.go:130] duration metric: took 11.730001ms to wait for apiserver health ...
	I0330 09:56:31.927578   46383 system_pods.go:43] waiting for kube-system pods to appear ...
	I0330 09:56:31.934929   46383 system_pods.go:59] 8 kube-system pods found
	I0330 09:56:31.934950   46383 system_pods.go:61] "coredns-5d78c9869d-g2s5p" [453114c1-b206-40c2-8be9-8e23e472aead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0330 09:56:31.934968   46383 system_pods.go:61] "etcd-newest-cni-996000" [4f5439fb-285f-4be2-b690-3e1d0a7f83e6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0330 09:56:31.934991   46383 system_pods.go:61] "kube-apiserver-newest-cni-996000" [bcd852c4-597b-494f-a086-d3d7605b4e25] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0330 09:56:31.935010   46383 system_pods.go:61] "kube-controller-manager-newest-cni-996000" [b1066665-08d0-44dc-8e2e-683d2457443b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0330 09:56:31.935023   46383 system_pods.go:61] "kube-proxy-sj62f" [0257c3d1-99b9-43e6-86cb-ee5d1e390f96] Running
	I0330 09:56:31.935029   46383 system_pods.go:61] "kube-scheduler-newest-cni-996000" [c554f01b-850e-4c45-87fa-19246f8edd3e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0330 09:56:31.935034   46383 system_pods.go:61] "metrics-server-544b559666-68cpp" [d4720486-f499-42b1-939e-654f529d8348] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0330 09:56:31.935047   46383 system_pods.go:61] "storage-provisioner" [40374a44-8de5-4bf8-af54-dd826d5c439c] Running
	I0330 09:56:31.935061   46383 system_pods.go:74] duration metric: took 7.469796ms to wait for pod list to return data ...
	I0330 09:56:31.935067   46383 default_sa.go:34] waiting for default service account to be created ...
	I0330 09:56:31.939097   46383 default_sa.go:45] found service account: "default"
	I0330 09:56:31.939111   46383 default_sa.go:55] duration metric: took 4.028835ms for default service account to be created ...
	I0330 09:56:31.939118   46383 kubeadm.go:578] duration metric: took 283.610067ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0330 09:56:31.939139   46383 node_conditions.go:102] verifying NodePressure condition ...
	I0330 09:56:31.943547   46383 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0330 09:56:31.943561   46383 node_conditions.go:123] node cpu capacity is 6
	I0330 09:56:31.943574   46383 node_conditions.go:105] duration metric: took 4.427863ms to run NodePressure ...
	I0330 09:56:31.943584   46383 start.go:228] waiting for startup goroutines ...
	I0330 09:56:31.969809   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:31.971989   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:31.972785   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:31.988599   46383 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60259 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/newest-cni-996000/id_rsa Username:docker}
	I0330 09:56:32.067105   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0330 09:56:32.067117   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0330 09:56:32.068820   46383 addons.go:420] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0330 09:56:32.068832   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0330 09:56:32.069052   46383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0330 09:56:32.084056   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0330 09:56:32.084072   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0330 09:56:32.085112   46383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0330 09:56:32.086413   46383 addons.go:420] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0330 09:56:32.086431   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0330 09:56:32.107190   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0330 09:56:32.107241   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0330 09:56:32.135473   46383 addons.go:420] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0330 09:56:32.135492   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0330 09:56:32.148771   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0330 09:56:32.148784   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0330 09:56:32.155415   46383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0330 09:56:32.168395   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0330 09:56:32.168420   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0330 09:56:32.244300   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0330 09:56:32.244325   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0330 09:56:32.268869   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0330 09:56:32.268891   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0330 09:56:32.357422   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0330 09:56:32.357441   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0330 09:56:32.436508   46383 addons.go:420] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0330 09:56:32.436528   46383 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0330 09:56:32.461346   46383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0330 09:56:33.281774   46383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.212670392s)
	I0330 09:56:33.281848   46383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.196692691s)
	W0330 09:56:33.281869   46383 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR; received from peer
	I0330 09:56:33.281905   46383 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.126441728s)
	I0330 09:56:33.281922   46383 retry.go:31] will retry after 306.297003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR; received from peer
	I0330 09:56:33.281926   46383 addons.go:464] Verifying addon metrics-server=true in "newest-cni-996000"
	I0330 09:56:33.425290   46383 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-996000 addons enable metrics-server	
	
	
	I0330 09:56:33.589216   46383 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.0-rc.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0330 09:56:33.830719   46383 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0330 09:56:33.872674   46383 addons.go:499] enable addons completed in 2.22987042s: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0330 09:56:33.872707   46383 start.go:233] waiting for cluster config update ...
	I0330 09:56:33.872731   46383 start.go:242] writing updated cluster config ...
	I0330 09:56:33.873134   46383 ssh_runner.go:195] Run: rm -f paused
	I0330 09:56:33.913694   46383 start.go:557] kubectl: 1.25.4, cluster: 1.27.0-rc.0 (minor skew: 2)
	I0330 09:56:33.934606   46383 out.go:177] 
	W0330 09:56:33.955740   46383 out.go:239] ! /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.27.0-rc.0.
	I0330 09:56:33.976618   46383 out.go:177]   - Want kubectl v1.27.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0330 09:56:34.039698   46383 out.go:177] * Done! kubectl is now configured to use "newest-cni-996000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-03-30 16:35:24 UTC, end at Thu 2023-03-30 17:02:26 UTC. --
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.963817851Z" level=info msg="[core] [Channel #1] Channel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.964168371Z" level=info msg="[core] [Channel #1 SubChannel #2] Subchannel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.964356862Z" level=info msg="[core] [Channel #1] Channel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965081236Z" level=info msg="[core] [Channel #4] Channel created" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965125069Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965140127Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965148593Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965194685Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965214733Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965244480Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965263887Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965280627Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965327664Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965654187Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.965729174Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.966348901Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 30 16:35:26 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:26.974662847Z" level=info msg="Loading containers: start."
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.056519280Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.091396285Z" level=info msg="Loading containers: done."
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.099863638Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.099926662Z" level=info msg="Daemon has completed initialization"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.120983112Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Mar 30 16:35:27 old-k8s-version-331000 systemd[1]: Started Docker Application Container Engine.
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.127843061Z" level=info msg="API listen on [::]:2376"
	Mar 30 16:35:27 old-k8s-version-331000 dockerd[644]: time="2023-03-30T16:35:27.130601270Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-03-30T17:02:29Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Mar30 16:31] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	* 
	* ==> kernel <==
	*  17:02:29 up  3:01,  0 users,  load average: 0.30, 0.49, 0.80
	Linux old-k8s-version-331000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-03-30 16:35:24 UTC, end at Thu 2023-03-30 17:02:29 UTC. --
	Mar 30 17:02:27 old-k8s-version-331000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33985]: I0330 17:02:28.016928   33985 server.go:410] Version: v1.16.0
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33985]: I0330 17:02:28.017184   33985 plugins.go:100] No cloud provider specified.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33985]: I0330 17:02:28.017198   33985 server.go:773] Client rotation is on, will bootstrap in background
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33985]: I0330 17:02:28.018943   33985 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33985]: W0330 17:02:28.019747   33985 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33985]: W0330 17:02:28.019821   33985 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33985]: F0330 17:02:28.019845   33985 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 30 17:02:28 old-k8s-version-331000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 30 17:02:28 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 30 17:02:28 old-k8s-version-331000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Mar 30 17:02:28 old-k8s-version-331000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 30 17:02:28 old-k8s-version-331000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33997]: I0330 17:02:28.768453   33997 server.go:410] Version: v1.16.0
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33997]: I0330 17:02:28.768711   33997 plugins.go:100] No cloud provider specified.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33997]: I0330 17:02:28.768748   33997 server.go:773] Client rotation is on, will bootstrap in background
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33997]: I0330 17:02:28.770454   33997 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33997]: W0330 17:02:28.771152   33997 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33997]: W0330 17:02:28.771221   33997 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Mar 30 17:02:28 old-k8s-version-331000 kubelet[33997]: F0330 17:02:28.771249   33997 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Mar 30 17:02:28 old-k8s-version-331000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 30 17:02:28 old-k8s-version-331000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 30 17:02:29 old-k8s-version-331000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Mar 30 17:02:29 old-k8s-version-331000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 30 17:02:29 old-k8s-version-331000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E0330 10:02:29.228567   46963 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 2 (396.069747ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-331000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.72s)


Test pass (283/318)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 25.87
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.26.3/json-events 19.1
11 TestDownloadOnly/v1.26.3/preload-exists 0
14 TestDownloadOnly/v1.26.3/kubectl 0
15 TestDownloadOnly/v1.26.3/LogsDuration 0.29
17 TestDownloadOnly/v1.27.0-rc.0/json-events 19.96
18 TestDownloadOnly/v1.27.0-rc.0/preload-exists 0
21 TestDownloadOnly/v1.27.0-rc.0/kubectl 0
22 TestDownloadOnly/v1.27.0-rc.0/LogsDuration 0.29
23 TestDownloadOnly/DeleteAll 0.69
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
25 TestDownloadOnlyKic 2.14
26 TestBinaryMirror 1.8
27 TestOffline 47.06
29 TestAddons/Setup 158.28
33 TestAddons/parallel/MetricsServer 5.98
34 TestAddons/parallel/HelmTiller 11.92
36 TestAddons/parallel/CSI 49.28
37 TestAddons/parallel/Headlamp 11.31
38 TestAddons/parallel/CloudSpanner 5.54
41 TestAddons/serial/GCPAuth/Namespaces 0.11
42 TestAddons/StoppedEnableDisable 11.54
43 TestCertOptions 29.07
44 TestCertExpiration 241.5
45 TestDockerFlags 30.96
46 TestForceSystemdFlag 30.6
47 TestForceSystemdEnv 32.81
49 TestHyperKitDriverInstallOrUpdate 6.8
52 TestErrorSpam/setup 25.77
53 TestErrorSpam/start 2.81
54 TestErrorSpam/status 1.25
55 TestErrorSpam/pause 1.77
56 TestErrorSpam/unpause 1.81
57 TestErrorSpam/stop 11.55
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 49.56
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 37.45
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 8.37
69 TestFunctional/serial/CacheCmd/cache/add_local 1.7
70 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
71 TestFunctional/serial/CacheCmd/cache/list 0.07
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.42
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.93
74 TestFunctional/serial/CacheCmd/cache/delete 0.14
75 TestFunctional/serial/MinikubeKubectlCmd 0.53
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.7
77 TestFunctional/serial/ExtraConfig 44.72
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 3.09
80 TestFunctional/serial/LogsFileCmd 3.06
82 TestFunctional/parallel/ConfigCmd 0.44
83 TestFunctional/parallel/DashboardCmd 13.79
84 TestFunctional/parallel/DryRun 2.1
85 TestFunctional/parallel/InternationalLanguage 0.78
86 TestFunctional/parallel/StatusCmd 1.26
91 TestFunctional/parallel/AddonsCmd 0.24
92 TestFunctional/parallel/PersistentVolumeClaim 30.49
94 TestFunctional/parallel/SSHCmd 0.83
95 TestFunctional/parallel/CpCmd 1.99
96 TestFunctional/parallel/MySQL 27.05
97 TestFunctional/parallel/FileSync 0.44
98 TestFunctional/parallel/CertSync 2.74
102 TestFunctional/parallel/NodeLabels 0.07
104 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
106 TestFunctional/parallel/License 0.97
107 TestFunctional/parallel/Version/short 0.12
108 TestFunctional/parallel/Version/components 1.06
109 TestFunctional/parallel/ImageCommands/ImageListShort 0.38
110 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
111 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
112 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
113 TestFunctional/parallel/ImageCommands/ImageBuild 8.79
114 TestFunctional/parallel/ImageCommands/Setup 2.83
115 TestFunctional/parallel/DockerEnv/bash 2.08
116 TestFunctional/parallel/UpdateContextCmd/no_changes 0.31
117 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.4
118 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.87
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.62
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.18
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.1
123 TestFunctional/parallel/ImageCommands/ImageRemove 0.81
124 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.11
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.6
126 TestFunctional/parallel/ServiceCmd/DeployApp 15.13
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
132 TestFunctional/parallel/ServiceCmd/List 0.63
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
134 TestFunctional/parallel/ServiceCmd/HTTPS 15
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
141 TestFunctional/parallel/ServiceCmd/Format 15
142 TestFunctional/parallel/ServiceCmd/URL 15
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
144 TestFunctional/parallel/ProfileCmd/profile_list 0.48
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
146 TestFunctional/parallel/MountCmd/any-port 10.71
147 TestFunctional/parallel/MountCmd/specific-port 2.76
148 TestFunctional/delete_addon-resizer_images 0.16
149 TestFunctional/delete_my-image_image 0.06
150 TestFunctional/delete_minikube_cached_images 0.06
154 TestImageBuild/serial/NormalBuild 2.29
155 TestImageBuild/serial/BuildWithBuildArg 0.97
156 TestImageBuild/serial/BuildWithDockerIgnore 0.47
157 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
167 TestJSONOutput/start/Command 40.51
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.62
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.6
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 5.73
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.76
192 TestKicCustomNetwork/create_custom_network 27.78
193 TestKicCustomNetwork/use_default_bridge_network 27.48
194 TestKicExistingNetwork 27.06
195 TestKicCustomSubnet 27.25
196 TestKicStaticIP 27.17
197 TestMainNoArgs 0.07
198 TestMinikubeProfile 57.38
201 TestMountStart/serial/StartWithMountFirst 8.38
202 TestMountStart/serial/VerifyMountFirst 0.4
203 TestMountStart/serial/StartWithMountSecond 8.62
204 TestMountStart/serial/VerifyMountSecond 0.41
205 TestMountStart/serial/DeleteFirst 2.17
206 TestMountStart/serial/VerifyMountPostDelete 0.4
207 TestMountStart/serial/Stop 1.59
208 TestMountStart/serial/RestartStopped 6.17
209 TestMountStart/serial/VerifyMountPostStop 0.4
212 TestMultiNode/serial/FreshStart2Nodes 102.29
213 TestMultiNode/serial/DeployApp2Nodes 38.56
214 TestMultiNode/serial/PingHostFrom2Pods 0.84
215 TestMultiNode/serial/AddNode 20.21
216 TestMultiNode/serial/ProfileList 0.44
217 TestMultiNode/serial/CopyFile 14.72
218 TestMultiNode/serial/StopNode 3.08
219 TestMultiNode/serial/StartAfterStop 10.54
220 TestMultiNode/serial/RestartKeepsNodes 88.76
221 TestMultiNode/serial/DeleteNode 6.25
222 TestMultiNode/serial/StopMultiNode 21.98
223 TestMultiNode/serial/RestartMultiNode 51.92
224 TestMultiNode/serial/ValidateNameConflict 29.52
228 TestPreload 141.97
230 TestScheduledStopUnix 99.04
233 TestInsufficientStorage 14.61
249 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 16.67
250 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 15.95
251 TestStoppedBinaryUpgrade/Setup 4.39
253 TestStoppedBinaryUpgrade/MinikubeLogs 3.66
255 TestPause/serial/Start 50.95
256 TestPause/serial/SecondStartNoReconfiguration 42.19
257 TestPause/serial/Pause 0.74
258 TestPause/serial/VerifyStatus 0.41
259 TestPause/serial/Unpause 0.65
260 TestPause/serial/PauseAgain 0.73
261 TestPause/serial/DeletePaused 2.65
262 TestPause/serial/VerifyDeletedResources 0.57
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.37
272 TestNoKubernetes/serial/StartWithK8s 25.78
273 TestNoKubernetes/serial/StartWithStopK8s 9.03
274 TestNoKubernetes/serial/Start 7.33
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
276 TestNoKubernetes/serial/ProfileList 1.39
277 TestNoKubernetes/serial/Stop 1.6
278 TestNoKubernetes/serial/StartNoArgs 5.12
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
280 TestNetworkPlugins/group/auto/Start 44.13
281 TestNetworkPlugins/group/auto/KubeletFlags 0.41
282 TestNetworkPlugins/group/auto/NetCatPod 12.19
283 TestNetworkPlugins/group/auto/DNS 0.13
284 TestNetworkPlugins/group/auto/Localhost 0.11
285 TestNetworkPlugins/group/auto/HairPin 0.12
286 TestNetworkPlugins/group/kindnet/Start 54.76
287 TestNetworkPlugins/group/calico/Start 69.06
288 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
289 TestNetworkPlugins/group/kindnet/KubeletFlags 0.5
290 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
291 TestNetworkPlugins/group/kindnet/DNS 0.14
292 TestNetworkPlugins/group/kindnet/Localhost 0.13
293 TestNetworkPlugins/group/kindnet/HairPin 0.12
294 TestNetworkPlugins/group/custom-flannel/Start 57.34
295 TestNetworkPlugins/group/calico/ControllerPod 5.02
296 TestNetworkPlugins/group/calico/KubeletFlags 0.41
297 TestNetworkPlugins/group/calico/NetCatPod 13.22
298 TestNetworkPlugins/group/calico/DNS 0.16
299 TestNetworkPlugins/group/calico/Localhost 0.13
300 TestNetworkPlugins/group/calico/HairPin 0.13
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.47
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.24
303 TestNetworkPlugins/group/false/Start 42.02
304 TestNetworkPlugins/group/custom-flannel/DNS 0.16
305 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
306 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
307 TestNetworkPlugins/group/enable-default-cni/Start 43.11
308 TestNetworkPlugins/group/false/KubeletFlags 0.45
309 TestNetworkPlugins/group/false/NetCatPod 12.22
310 TestNetworkPlugins/group/false/DNS 0.12
311 TestNetworkPlugins/group/false/Localhost 0.11
312 TestNetworkPlugins/group/false/HairPin 0.12
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.22
315 TestNetworkPlugins/group/flannel/Start 55.89
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
319 TestNetworkPlugins/group/bridge/Start 44.02
320 TestNetworkPlugins/group/flannel/ControllerPod 5.02
321 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
322 TestNetworkPlugins/group/flannel/NetCatPod 13.2
323 TestNetworkPlugins/group/flannel/DNS 0.13
324 TestNetworkPlugins/group/flannel/Localhost 0.12
325 TestNetworkPlugins/group/flannel/HairPin 0.11
326 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
327 TestNetworkPlugins/group/bridge/NetCatPod 13.2
328 TestNetworkPlugins/group/bridge/DNS 0.14
329 TestNetworkPlugins/group/bridge/Localhost 0.12
330 TestNetworkPlugins/group/bridge/HairPin 0.13
331 TestNetworkPlugins/group/kubenet/Start 45.74
334 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
335 TestNetworkPlugins/group/kubenet/NetCatPod 12.19
336 TestNetworkPlugins/group/kubenet/DNS 0.12
337 TestNetworkPlugins/group/kubenet/Localhost 0.11
338 TestNetworkPlugins/group/kubenet/HairPin 0.11
340 TestStartStop/group/no-preload/serial/FirstStart 66.63
341 TestStartStop/group/no-preload/serial/DeployApp 9.29
342 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
343 TestStartStop/group/no-preload/serial/Stop 11.01
344 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
345 TestStartStop/group/no-preload/serial/SecondStart 304.44
348 TestStartStop/group/old-k8s-version/serial/Stop 1.63
349 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.38
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 16.02
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
354 TestStartStop/group/no-preload/serial/Pause 3.18
356 TestStartStop/group/embed-certs/serial/FirstStart 50.8
357 TestStartStop/group/embed-certs/serial/DeployApp 10.28
358 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.86
359 TestStartStop/group/embed-certs/serial/Stop 10.94
360 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.37
361 TestStartStop/group/embed-certs/serial/SecondStart 313.12
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.02
364 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
365 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
366 TestStartStop/group/embed-certs/serial/Pause 3.29
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.99
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.97
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.38
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 581.9
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.2
380 TestStartStop/group/newest-cni/serial/FirstStart 38.39
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
383 TestStartStop/group/newest-cni/serial/Stop 10.95
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.38
385 TestStartStop/group/newest-cni/serial/SecondStart 25.46
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
389 TestStartStop/group/newest-cni/serial/Pause 3.21
TestDownloadOnly/v1.16.0/json-events (25.87s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (25.867056439s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (25.87s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-150000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-150000: exit status 85 (303.236622ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-150000 | jenkins | v1.29.0 | 30 Mar 23 08:37 PDT |          |
	|         | -p download-only-150000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 08:37:22
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 08:37:22.818334   25450 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:37:22.818523   25450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:37:22.818530   25450 out.go:309] Setting ErrFile to fd 2...
	I0330 08:37:22.818534   25450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:37:22.818647   25450 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	W0330 08:37:22.818750   25450 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16199-24978/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16199-24978/.minikube/config/config.json: no such file or directory
	I0330 08:37:22.820358   25450 out.go:303] Setting JSON to true
	I0330 08:37:22.840505   25450 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5810,"bootTime":1680184832,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 08:37:22.840664   25450 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 08:37:22.862602   25450 out.go:97] [download-only-150000] minikube v1.29.0 on Darwin 13.3
	I0330 08:37:22.862857   25450 notify.go:220] Checking for updates...
	W0330 08:37:22.862876   25450 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball: no such file or directory
	I0330 08:37:22.884429   25450 out.go:169] MINIKUBE_LOCATION=16199
	I0330 08:37:22.905594   25450 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 08:37:22.927588   25450 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 08:37:22.948622   25450 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 08:37:22.969559   25450 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	W0330 08:37:23.011447   25450 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0330 08:37:23.011833   25450 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 08:37:23.075445   25450 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 08:37:23.075545   25450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:37:23.261131   25450 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:37:23.126716053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:37:23.283171   25450 out.go:97] Using the docker driver based on user configuration
	I0330 08:37:23.283238   25450 start.go:295] selected driver: docker
	I0330 08:37:23.283253   25450 start.go:859] validating driver "docker" against <nil>
	I0330 08:37:23.283493   25450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:37:23.471698   25450 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:37:23.336862515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:37:23.471858   25450 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0330 08:37:23.474465   25450 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0330 08:37:23.474601   25450 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0330 08:37:23.497191   25450 out.go:169] Using Docker Desktop driver with root privileges
	I0330 08:37:23.518126   25450 cni.go:84] Creating CNI manager for ""
	I0330 08:37:23.518166   25450 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0330 08:37:23.518201   25450 start_flags.go:319] config:
	{Name:download-only-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 08:37:23.540009   25450 out.go:97] Starting control plane node download-only-150000 in cluster download-only-150000
	I0330 08:37:23.540093   25450 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 08:37:23.562042   25450 out.go:97] Pulling base image ...
	I0330 08:37:23.562151   25450 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 08:37:23.562180   25450 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 08:37:23.619800   25450 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0330 08:37:23.620052   25450 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory
	I0330 08:37:23.620169   25450 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0330 08:37:23.656915   25450 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0330 08:37:23.656948   25450 cache.go:57] Caching tarball of preloaded images
	I0330 08:37:23.657289   25450 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 08:37:23.678991   25450 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0330 08:37:23.679093   25450 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:37:23.879675   25450 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0330 08:37:44.403819   25450 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:37:44.404043   25450 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:37:44.942064   25450 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0330 08:37:44.942286   25450 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/download-only-150000/config.json ...
	I0330 08:37:44.942315   25450 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/download-only-150000/config.json: {Name:mk4294c83a816cc7147b16e53e286d91323115ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0330 08:37:44.942617   25450 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0330 08:37:44.942904   25450 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-150000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

TestDownloadOnly/v1.26.3/json-events (19.1s)

=== RUN   TestDownloadOnly/v1.26.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.26.3 --container-runtime=docker --driver=docker : (19.096921508s)
--- PASS: TestDownloadOnly/v1.26.3/json-events (19.10s)

TestDownloadOnly/v1.26.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.3/preload-exists
--- PASS: TestDownloadOnly/v1.26.3/preload-exists (0.00s)

TestDownloadOnly/v1.26.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.3/kubectl
--- PASS: TestDownloadOnly/v1.26.3/kubectl (0.00s)

TestDownloadOnly/v1.26.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.26.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-150000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-150000: exit status 85 (286.696195ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-150000 | jenkins | v1.29.0 | 30 Mar 23 08:37 PDT |          |
	|         | -p download-only-150000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-150000 | jenkins | v1.29.0 | 30 Mar 23 08:37 PDT |          |
	|         | -p download-only-150000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 08:37:48
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 08:37:48.994466   25501 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:37:48.994648   25501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:37:48.994654   25501 out.go:309] Setting ErrFile to fd 2...
	I0330 08:37:48.994658   25501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:37:48.994772   25501 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	W0330 08:37:48.994866   25501 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16199-24978/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16199-24978/.minikube/config/config.json: no such file or directory
	I0330 08:37:48.996111   25501 out.go:303] Setting JSON to true
	I0330 08:37:49.016578   25501 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5837,"bootTime":1680184832,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 08:37:49.016670   25501 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 08:37:49.038065   25501 out.go:97] [download-only-150000] minikube v1.29.0 on Darwin 13.3
	I0330 08:37:49.038345   25501 notify.go:220] Checking for updates...
	I0330 08:37:49.059943   25501 out.go:169] MINIKUBE_LOCATION=16199
	I0330 08:37:49.081225   25501 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 08:37:49.103184   25501 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 08:37:49.125098   25501 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 08:37:49.146111   25501 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	W0330 08:37:49.189899   25501 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0330 08:37:49.190557   25501 config.go:182] Loaded profile config "download-only-150000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0330 08:37:49.190636   25501 start.go:767] api.Load failed for download-only-150000: filestore "download-only-150000": Docker machine "download-only-150000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0330 08:37:49.190726   25501 driver.go:365] Setting default libvirt URI to qemu:///system
	W0330 08:37:49.190760   25501 start.go:767] api.Load failed for download-only-150000: filestore "download-only-150000": Docker machine "download-only-150000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0330 08:37:49.254572   25501 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 08:37:49.254694   25501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:37:49.439816   25501 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:37:49.306836475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:37:49.461713   25501 out.go:97] Using the docker driver based on existing profile
	I0330 08:37:49.461822   25501 start.go:295] selected driver: docker
	I0330 08:37:49.461837   25501 start.go:859] validating driver "docker" against &{Name:download-only-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-150000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP:}
	I0330 08:37:49.462117   25501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:37:49.651860   25501 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:37:49.515181508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:37:49.654570   25501 cni.go:84] Creating CNI manager for ""
	I0330 08:37:49.654596   25501 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 08:37:49.654615   25501 start_flags.go:319] config:
	{Name:download-only-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:download-only-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 08:37:49.677575   25501 out.go:97] Starting control plane node download-only-150000 in cluster download-only-150000
	I0330 08:37:49.677681   25501 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 08:37:49.699106   25501 out.go:97] Pulling base image ...
	I0330 08:37:49.699229   25501 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 08:37:49.699300   25501 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 08:37:49.757570   25501 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0330 08:37:49.757724   25501 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory
	I0330 08:37:49.757746   25501 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory, skipping pull
	I0330 08:37:49.757752   25501 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in cache, skipping pull
	I0330 08:37:49.757760   25501 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 as a tarball
	I0330 08:37:49.789876   25501 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.3/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	I0330 08:37:49.789916   25501 cache.go:57] Caching tarball of preloaded images
	I0330 08:37:49.790257   25501 preload.go:132] Checking if preload exists for k8s version v1.26.3 and runtime docker
	I0330 08:37:49.812334   25501 out.go:97] Downloading Kubernetes v1.26.3 preload ...
	I0330 08:37:49.812426   25501 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:37:50.021031   25501 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.3/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4?checksum=md5:b698631b54adb014b111f0258a79e081 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-150000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.3/LogsDuration (0.29s)

TestDownloadOnly/v1.27.0-rc.0/json-events (19.96s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.27.0-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-150000 --force --alsologtostderr --kubernetes-version=v1.27.0-rc.0 --container-runtime=docker --driver=docker : (19.96267734s)
--- PASS: TestDownloadOnly/v1.27.0-rc.0/json-events (19.96s)

TestDownloadOnly/v1.27.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.27.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.27.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.27.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.27.0-rc.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.27.0-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-150000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-150000: exit status 85 (284.93946ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-150000 | jenkins | v1.29.0 | 30 Mar 23 08:37 PDT |          |
	|         | -p download-only-150000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-150000 | jenkins | v1.29.0 | 30 Mar 23 08:37 PDT |          |
	|         | -p download-only-150000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.3      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-150000 | jenkins | v1.29.0 | 30 Mar 23 08:38 PDT |          |
	|         | -p download-only-150000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/30 08:38:08
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.20.2 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0330 08:38:08.380414   25548 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:38:08.380623   25548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:38:08.380628   25548 out.go:309] Setting ErrFile to fd 2...
	I0330 08:38:08.380651   25548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:38:08.380776   25548 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	W0330 08:38:08.380899   25548 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16199-24978/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16199-24978/.minikube/config/config.json: no such file or directory
	I0330 08:38:08.382160   25548 out.go:303] Setting JSON to true
	I0330 08:38:08.402317   25548 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5856,"bootTime":1680184832,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 08:38:08.402402   25548 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 08:38:08.423910   25548 out.go:97] [download-only-150000] minikube v1.29.0 on Darwin 13.3
	I0330 08:38:08.424156   25548 notify.go:220] Checking for updates...
	I0330 08:38:08.446947   25548 out.go:169] MINIKUBE_LOCATION=16199
	I0330 08:38:08.467950   25548 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 08:38:08.490096   25548 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 08:38:08.512112   25548 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 08:38:08.533896   25548 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	W0330 08:38:08.576950   25548 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0330 08:38:08.577624   25548 config.go:182] Loaded profile config "download-only-150000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	W0330 08:38:08.577713   25548 start.go:767] api.Load failed for download-only-150000: filestore "download-only-150000": Docker machine "download-only-150000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0330 08:38:08.577798   25548 driver.go:365] Setting default libvirt URI to qemu:///system
	W0330 08:38:08.577843   25548 start.go:767] api.Load failed for download-only-150000: filestore "download-only-150000": Docker machine "download-only-150000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0330 08:38:08.642786   25548 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 08:38:08.642896   25548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:38:08.827422   25548 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:38:08.694204723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:38:08.849349   25548 out.go:97] Using the docker driver based on existing profile
	I0330 08:38:08.849411   25548 start.go:295] selected driver: docker
	I0330 08:38:08.849422   25548 start.go:859] validating driver "docker" against &{Name:download-only-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:download-only-150000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP:}
	I0330 08:38:08.849743   25548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:38:09.036215   25548 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:47 SystemTime:2023-03-30 15:38:08.902107255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:38:09.038980   25548 cni.go:84] Creating CNI manager for ""
	I0330 08:38:09.039007   25548 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0330 08:38:09.039024   25548 start_flags.go:319] config:
	{Name:download-only-150000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.0-rc.0 ClusterName:download-only-150000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 08:38:09.060982   25548 out.go:97] Starting control plane node download-only-150000 in cluster download-only-150000
	I0330 08:38:09.061096   25548 cache.go:120] Beginning downloading kic base image for docker with docker
	I0330 08:38:09.082483   25548 out.go:97] Pulling base image ...
	I0330 08:38:09.082605   25548 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0330 08:38:09.082680   25548 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local docker daemon
	I0330 08:38:09.140164   25548 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 to local cache
	I0330 08:38:09.140308   25548 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory
	I0330 08:38:09.140331   25548 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 in local cache directory, skipping pull
	I0330 08:38:09.140337   25548 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 exists in cache, skipping pull
	I0330 08:38:09.140345   25548 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 as a tarball
	I0330 08:38:09.166905   25548 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.0-rc.0/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0330 08:38:09.166950   25548 cache.go:57] Caching tarball of preloaded images
	I0330 08:38:09.167316   25548 preload.go:132] Checking if preload exists for k8s version v1.27.0-rc.0 and runtime docker
	I0330 08:38:09.189712   25548 out.go:97] Downloading Kubernetes v1.27.0-rc.0 preload ...
	I0330 08:38:09.189812   25548 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:38:09.394422   25548 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.0-rc.0/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:6096a776168534014d2f50b9988b2d60 -> /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0330 08:38:23.140023   25548 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0330 08:38:23.140249   25548 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16199-24978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-150000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.0-rc.0/LogsDuration (0.29s)
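
The download-only runs above fetch Kubernetes preload tarballs whose md5 is carried in the request URL. A minimal sketch of re-checking the v1.27.0-rc.0 download by hand, using the URL and checksum taken from the log above (the use of curl and the macOS md5 tool here is an assumption, not part of the test):

	curl -fLo /tmp/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4 \
	  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.0-rc.0/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4"
	# should print 6096a776168534014d2f50b9988b2d60, the checksum requested in the log above
	md5 -q /tmp/preloaded-images-k8s-v18-v1.27.0-rc.0-docker-overlay2-amd64.tar.lz4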

TestDownloadOnly/DeleteAll (0.69s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.69s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-150000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestDownloadOnlyKic (2.14s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-550000 --alsologtostderr --driver=docker 
aaa_download_only_test.go:226: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-550000 --alsologtostderr --driver=docker : (1.039392476s)
helpers_test.go:175: Cleaning up "download-docker-550000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-550000
--- PASS: TestDownloadOnlyKic (2.14s)

TestBinaryMirror (1.8s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-149000 --alsologtostderr --binary-mirror http://127.0.0.1:55055 --driver=docker 
aaa_download_only_test.go:308: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-149000 --alsologtostderr --binary-mirror http://127.0.0.1:55055 --driver=docker : (1.166378159s)
helpers_test.go:175: Cleaning up "binary-mirror-149000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-149000
--- PASS: TestBinaryMirror (1.80s)

TestOffline (47.06s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-619000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-619000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (44.144267228s)
helpers_test.go:175: Cleaning up "offline-docker-619000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-619000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-619000: (2.913361741s)
--- PASS: TestOffline (47.06s)

TestAddons/Setup (158.28s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-443000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-443000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m38.27811176s)
--- PASS: TestAddons/Setup (158.28s)
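
The setup run above turns on every addon from the start command line. A minimal sketch of the equivalent post-start form against the same profile, using addon names taken from the flags above (which addons to enable this way is illustrative):

	out/minikube-darwin-amd64 -p addons-443000 addons list
	out/minikube-darwin-amd64 -p addons-443000 addons enable metrics-server
	out/minikube-darwin-amd64 -p addons-443000 addons enable csi-hostpath-driver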

TestAddons/parallel/MetricsServer (5.98s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: metrics-server stabilized in 2.253751ms
addons_test.go:384: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-6588d95b98-v2qgw" [a88987ec-afbe-499d-969e-ae354f0c7a9c] Running
addons_test.go:384: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007340726s
addons_test.go:390: (dbg) Run:  kubectl --context addons-443000 top pods -n kube-system
addons_test.go:407: (dbg) Run:  out/minikube-darwin-amd64 -p addons-443000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.98s)
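
The health check above amounts to metrics-server answering a top query within the wait window. A minimal sketch of the same probe against this context (the pod name suffix differs per run):

	kubectl --context addons-443000 get pods -n kube-system -l k8s-app=metrics-server
	kubectl --context addons-443000 top pods -n kube-system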

TestAddons/parallel/HelmTiller (11.92s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:431: tiller-deploy stabilized in 2.38839ms
addons_test.go:433: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-227lr" [103690ee-58a3-4bf9-847a-936a0b012967] Running
addons_test.go:433: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009697492s
addons_test.go:448: (dbg) Run:  kubectl --context addons-443000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:448: (dbg) Done: kubectl --context addons-443000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.442417861s)
addons_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 -p addons-443000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.92s)

TestAddons/parallel/CSI (49.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:536: csi-hostpath-driver pods stabilized in 4.882858ms
addons_test.go:539: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:549: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f2838791-a2f3-4b93-b978-df33d81d8c32] Pending
helpers_test.go:344: "task-pv-pod" [f2838791-a2f3-4b93-b978-df33d81d8c32] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f2838791-a2f3-4b93-b978-df33d81d8c32] Running
addons_test.go:554: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005922344s
addons_test.go:559: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-443000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-443000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-443000 delete pod task-pv-pod
addons_test.go:575: (dbg) Run:  kubectl --context addons-443000 delete pvc hpvc
addons_test.go:581: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-443000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:591: (dbg) Run:  kubectl --context addons-443000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:596: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d0b7f463-6278-4a59-96a6-9e68be921c0e] Pending
helpers_test.go:344: "task-pv-pod-restore" [d0b7f463-6278-4a59-96a6-9e68be921c0e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d0b7f463-6278-4a59-96a6-9e68be921c0e] Running
addons_test.go:596: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007430956s
addons_test.go:601: (dbg) Run:  kubectl --context addons-443000 delete pod task-pv-pod-restore
addons_test.go:605: (dbg) Run:  kubectl --context addons-443000 delete pvc hpvc-restore
addons_test.go:609: (dbg) Run:  kubectl --context addons-443000 delete volumesnapshot new-snapshot-demo
addons_test.go:613: (dbg) Run:  out/minikube-darwin-amd64 -p addons-443000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:613: (dbg) Done: out/minikube-darwin-amd64 -p addons-443000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.538881606s)
addons_test.go:617: (dbg) Run:  out/minikube-darwin-amd64 -p addons-443000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.28s)
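
The sequence above drives PVC, pod, snapshot and restore manifests from testdata/csi-hostpath-driver/. A minimal sketch of a claim of that shape, applied the same way; the storageClassName, size and heredoc form are illustrative assumptions, not the contents of the actual testdata files:

	kubectl --context addons-443000 apply -f - <<-EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 1Gi
	  storageClassName: csi-hostpath-sc
	EOF
	kubectl --context addons-443000 get pvc hpvc -o jsonpath={.status.phase}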

TestAddons/parallel/Headlamp (11.31s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:799: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-443000 --alsologtostderr -v=1
addons_test.go:799: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-443000 --alsologtostderr -v=1: (1.303097279s)
addons_test.go:804: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58c48fc87f-b8h9s" [dd657829-ddba-444b-b3c5-fb32efdd3e0c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58c48fc87f-b8h9s" [dd657829-ddba-444b-b3c5-fb32efdd3e0c] Running
addons_test.go:804: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.008814027s
--- PASS: TestAddons/parallel/Headlamp (11.31s)

TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:820: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5dd65ff88c-nncrs" [d0066d39-4ad7-4f36-96df-244b55b21530] Running
addons_test.go:820: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009188156s
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-443000
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:625: (dbg) Run:  kubectl --context addons-443000 create ns new-namespace
addons_test.go:639: (dbg) Run:  kubectl --context addons-443000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (11.54s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-443000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-443000: (10.980136197s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-443000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-443000
addons_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-443000
--- PASS: TestAddons/StoppedEnableDisable (11.54s)

TestCertOptions (29.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-023000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-023000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (25.405718228s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-023000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-023000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-023000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-023000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-023000: (2.714060871s)
--- PASS: TestCertOptions (29.07s)
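
The openssl check above is what asserts that the extra --apiserver-names, --apiserver-ips and --apiserver-port values end up in the serving certificate. A minimal sketch of the manual equivalent, reusing the ssh command from the log (the trailing grep is an illustrative addition):

	out/minikube-darwin-amd64 -p cert-options-023000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"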

TestCertExpiration (241.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-220000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-220000 --memory=2048 --cert-expiration=3m --driver=docker : (27.855521795s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-220000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-220000 --memory=2048 --cert-expiration=8760h --driver=docker : (30.992435996s)
helpers_test.go:175: Cleaning up "cert-expiration-220000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-220000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-220000: (2.650004521s)
--- PASS: TestCertExpiration (241.50s)

TestDockerFlags (30.96s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-466000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-466000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (27.246917776s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-466000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-466000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-466000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-466000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-466000: (2.699281079s)
--- PASS: TestDockerFlags (30.96s)
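The --docker-env and --docker-opt values above end up in the Docker systemd unit inside the node, which is exactly what the two systemctl queries verify. A minimal manual check (profile name illustrative):
  $ out/minikube-darwin-amd64 start -p docker-flags-demo --docker-env=FOO=BAR --docker-opt=debug --driver=docker
  $ out/minikube-darwin-amd64 -p docker-flags-demo ssh "sudo systemctl show docker --property=Environment --no-pager"   # FOO=BAR should appear here
  $ out/minikube-darwin-amd64 -p docker-flags-demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # the docker-opt values should appear in the dockerd command line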

                                                
                                    
TestForceSystemdFlag (30.6s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-952000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-952000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (27.071818952s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-952000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-952000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-952000: (2.959756557s)
--- PASS: TestForceSystemdFlag (30.60s)
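This test and TestForceSystemdEnv below share the same assertion: with systemd forced on, Docker inside the node must report the systemd cgroup driver. The check is a one-liner (profile name illustrative):
  $ out/minikube-darwin-amd64 start -p force-systemd-demo --force-systemd --driver=docker
  $ out/minikube-darwin-amd64 -p force-systemd-demo ssh "docker info --format {{.CgroupDriver}}"   # expected output: systemd (not cgroupfs)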

                                                
                                    
TestForceSystemdEnv (32.81s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-271000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-271000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (29.105592385s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-271000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-271000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-271000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-271000: (3.160596283s)
--- PASS: TestForceSystemdEnv (32.81s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (6.8s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.80s)

                                                
                                    
TestErrorSpam/setup (25.77s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-731000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-731000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 --driver=docker : (25.768724557s)
--- PASS: TestErrorSpam/setup (25.77s)

                                                
                                    
TestErrorSpam/start (2.81s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 start --dry-run
--- PASS: TestErrorSpam/start (2.81s)

                                                
                                    
TestErrorSpam/status (1.25s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 status
--- PASS: TestErrorSpam/status (1.25s)

                                                
                                    
TestErrorSpam/pause (1.77s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
TestErrorSpam/stop (11.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 stop: (10.912232623s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-731000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-731000 stop
--- PASS: TestErrorSpam/stop (11.55s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/16199-24978/.minikube/files/etc/test/nested/copy/25448/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.56s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-602000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-amd64 start -p functional-602000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (49.561806314s)
--- PASS: TestFunctional/serial/StartWithProxy (49.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.45s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-602000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-amd64 start -p functional-602000 --alsologtostderr -v=8: (37.450496132s)
functional_test.go:658: soft start took 37.451125275s for "functional-602000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.45s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-602000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (8.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 cache add k8s.gcr.io/pause:3.1: (2.87667311s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 cache add k8s.gcr.io/pause:3.3: (2.827995036s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 cache add k8s.gcr.io/pause:latest: (2.665207195s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.37s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-602000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3858545218/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cache add minikube-local-cache-test:functional-602000
functional_test.go:1084: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 cache add minikube-local-cache-test:functional-602000: (1.166540851s)
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cache delete minikube-local-cache-test:functional-602000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-602000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.70s)
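The add_local variant builds a throwaway image on the host and loads it into minikube's cache rather than pulling from a registry. Roughly (image tag illustrative):
  $ docker build -t minikube-local-cache-test:demo .
  $ out/minikube-darwin-amd64 -p functional-602000 cache add minikube-local-cache-test:demo
  $ out/minikube-darwin-amd64 cache list
  $ out/minikube-darwin-amd64 -p functional-602000 cache delete minikube-local-cache-test:demo
  $ docker rmi minikube-local-cache-test:demo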

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.93s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (403.403255ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 cache reload: (1.692360059s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.93s)
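The non-zero exit above is the expected negative case: the image was removed from the node's runtime, so crictl can no longer find it until 'cache reload' restores it from the host-side cache. The full sequence, taken from the commands above:
  $ out/minikube-darwin-amd64 -p functional-602000 ssh sudo docker rmi k8s.gcr.io/pause:latest
  $ out/minikube-darwin-amd64 -p functional-602000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest    # fails: image not present
  $ out/minikube-darwin-amd64 -p functional-602000 cache reload
  $ out/minikube-darwin-amd64 -p functional-602000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest    # succeeds again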

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 kubectl -- --context functional-602000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.7s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-602000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.70s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.72s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-602000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-darwin-amd64 start -p functional-602000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.716154051s)
functional_test.go:756: restart took 44.716320782s for "functional-602000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.72s)
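--extra-config takes component.key=value pairs and is applied by restarting the existing profile; --wait=all makes the command block until every component is back, which accounts for the ~45s. One way to confirm the admission plugin actually reached the API server (not part of the test, just a sketch):
  $ out/minikube-darwin-amd64 start -p functional-602000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
  $ kubectl --context functional-602000 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins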

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-602000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
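The health check selects all control-plane pods by the tier=control-plane label and asserts each is Running and Ready, which is what the phase/status lines above record. The same view can be pulled directly:
  $ kubectl --context functional-602000 get po -l tier=control-plane -n kube-system
  $ kubectl --context functional-602000 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'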

                                                
                                    
TestFunctional/serial/LogsCmd (3.09s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 logs
functional_test.go:1231: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 logs: (3.093293809s)
--- PASS: TestFunctional/serial/LogsCmd (3.09s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.06s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4099949585/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4099949585/001/logs.txt: (3.055600882s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.06s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 config get cpus: exit status 14 (42.662634ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 config get cpus: exit status 14 (63.680988ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
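The two exit status 14 results are expected: 'config get' on a key that is not set fails with "specified key could not be found in config". The round trip being exercised is:
  $ out/minikube-darwin-amd64 -p functional-602000 config get cpus      # unset -> exit status 14
  $ out/minikube-darwin-amd64 -p functional-602000 config set cpus 2
  $ out/minikube-darwin-amd64 -p functional-602000 config get cpus      # prints 2
  $ out/minikube-darwin-amd64 -p functional-602000 config unset cpus
  $ out/minikube-darwin-amd64 -p functional-602000 config get cpus      # exit status 14 again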

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-602000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-602000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28029: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.79s)

                                                
                                    
TestFunctional/parallel/DryRun (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (1.034411568s)

                                                
                                                
-- stdout --
	* [functional-602000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0330 08:47:38.173572   27907 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:47:38.173748   27907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:47:38.173755   27907 out.go:309] Setting ErrFile to fd 2...
	I0330 08:47:38.173760   27907 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:47:38.173895   27907 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 08:47:38.175579   27907 out.go:303] Setting JSON to false
	I0330 08:47:38.198485   27907 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6426,"bootTime":1680184832,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 08:47:38.198664   27907 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 08:47:38.220816   27907 out.go:177] * [functional-602000] minikube v1.29.0 on Darwin 13.3
	I0330 08:47:38.242556   27907 notify.go:220] Checking for updates...
	I0330 08:47:38.264364   27907 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 08:47:38.312111   27907 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 08:47:38.354069   27907 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 08:47:38.395850   27907 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 08:47:38.437917   27907 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 08:47:38.480895   27907 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 08:47:38.502363   27907 config.go:182] Loaded profile config "functional-602000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 08:47:38.502858   27907 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 08:47:38.645816   27907 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 08:47:38.646003   27907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:47:38.946667   27907 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-30 15:47:38.756979926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:47:39.006754   27907 out.go:177] * Using the docker driver based on existing profile
	I0330 08:47:39.048663   27907 start.go:295] selected driver: docker
	I0330 08:47:39.048676   27907 start.go:859] validating driver "docker" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-602000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 08:47:39.048776   27907 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 08:47:39.072684   27907 out.go:177] 
	W0330 08:47:39.093520   27907 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0330 08:47:39.114599   27907 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-602000 --dry-run --alsologtostderr -v=1 --driver=docker 
functional_test.go:986: (dbg) Done: out/minikube-darwin-amd64 start -p functional-602000 --dry-run --alsologtostderr -v=1 --driver=docker : (1.065680173s)
--- PASS: TestFunctional/parallel/DryRun (2.10s)
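The exit status 23 is the intended negative case: 250MB is below the usable minimum of 1800MB reported in the error, so validation fails before anything is created; the second dry run without the memory override passes. A passing invocation would look like (memory value illustrative, anything at or above the minimum):
  $ out/minikube-darwin-amd64 start -p functional-602000 --dry-run --memory=2048 --alsologtostderr --driver=docker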

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-602000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (780.021766ms)

                                                
                                                
-- stdout --
	* [functional-602000] minikube v1.29.0 sur Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0330 08:47:40.259428   27985 out.go:296] Setting OutFile to fd 1 ...
	I0330 08:47:40.259607   27985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:47:40.259612   27985 out.go:309] Setting ErrFile to fd 2...
	I0330 08:47:40.259616   27985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 08:47:40.259748   27985 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 08:47:40.261407   27985 out.go:303] Setting JSON to false
	I0330 08:47:40.283348   27985 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6428,"bootTime":1680184832,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.3","kernelVersion":"22.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0330 08:47:40.283449   27985 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0330 08:47:40.305079   27985 out.go:177] * [functional-602000] minikube v1.29.0 sur Darwin 13.3
	I0330 08:47:40.326991   27985 notify.go:220] Checking for updates...
	I0330 08:47:40.349005   27985 out.go:177]   - MINIKUBE_LOCATION=16199
	I0330 08:47:40.369999   27985 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	I0330 08:47:40.411815   27985 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0330 08:47:40.453783   27985 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0330 08:47:40.497089   27985 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	I0330 08:47:40.538934   27985 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0330 08:47:40.560193   27985 config.go:182] Loaded profile config "functional-602000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 08:47:40.560541   27985 driver.go:365] Setting default libvirt URI to qemu:///system
	I0330 08:47:40.629913   27985 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
	I0330 08:47:40.630039   27985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0330 08:47:40.853597   27985 info.go:266] docker info: {ID:KGYP:ZLCS:CEMV:WBWD:NZSP:U3XB:GRL6:KWCU:PIJM:P62J:I6PQ:YUNG Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:55 SystemTime:2023-03-30 15:47:40.686088097 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0330 08:47:40.875495   27985 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0330 08:47:40.912425   27985 start.go:295] selected driver: docker
	I0330 08:47:40.912458   27985 start.go:859] validating driver "docker" against &{Name:functional-602000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.3 ClusterName:functional-602000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0330 08:47:40.912580   27985 start.go:870] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0330 08:47:40.936599   27985 out.go:177] 
	W0330 08:47:40.958915   27985 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0330 08:47:40.982745   27985 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.78s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d9665f10-ac7e-40e1-810b-b1a215c3a420] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010724699s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-602000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-602000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-602000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-602000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f61fa35a-94dd-465f-acf6-f0798f29cbb8] Pending
helpers_test.go:344: "sp-pod" [f61fa35a-94dd-465f-acf6-f0798f29cbb8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f61fa35a-94dd-465f-acf6-f0798f29cbb8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008317166s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-602000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-602000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-602000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b1d59299-85f7-4f62-8cb0-b14dd4d32fbd] Pending
helpers_test.go:344: "sp-pod" [b1d59299-85f7-4f62-8cb0-b14dd4d32fbd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b1d59299-85f7-4f62-8cb0-b14dd4d32fbd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.006516683s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-602000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.49s)
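The flow above: wait for storage-provisioner, create a claim and a pod that mounts it, write a file into the mount, delete and recreate the pod, then confirm the file survived on the persistent volume. Condensed to the kubectl steps from the log:
  $ kubectl --context functional-602000 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-602000 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-602000 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-602000 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-602000 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-602000 exec sp-pod -- ls /tmp/mount      # foo should still be listed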

                                                
                                    
TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh -n functional-602000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 cp functional-602000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd3989214158/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh -n functional-602000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)

                                                
                                    
TestFunctional/parallel/MySQL (27.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-602000 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-tjgpq" [afc76f4f-f6b5-4e99-863c-cc3c6d49f6d5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-tjgpq" [afc76f4f-f6b5-4e99-863c-cc3c6d49f6d5] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.015357044s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-602000 exec mysql-888f84dd9-tjgpq -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-602000 exec mysql-888f84dd9-tjgpq -- mysql -ppassword -e "show databases;": exit status 1 (166.924528ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-602000 exec mysql-888f84dd9-tjgpq -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-602000 exec mysql-888f84dd9-tjgpq -- mysql -ppassword -e "show databases;": exit status 1 (284.079081ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-602000 exec mysql-888f84dd9-tjgpq -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-602000 exec mysql-888f84dd9-tjgpq -- mysql -ppassword -e "show databases;": exit status 1 (226.12628ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-602000 exec mysql-888f84dd9-tjgpq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.05s)

TestFunctional/parallel/FileSync (0.44s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/25448/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo cat /etc/test/nested/copy/25448/hosts"
E0330 08:46:17.436785   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.74s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/25448.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/25448.pem"
E0330 08:46:14.876196   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/25448.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo cat /usr/share/ca-certificates/25448.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/254482.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/254482.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/254482.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo cat /usr/share/ca-certificates/254482.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.74s)

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-602000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
E0330 08:46:12.475022   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo systemctl is-active crio"
E0330 08:46:12.315277   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:46:12.321833   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:46:12.332285   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:46:12.353115   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:46:12.394330   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 ssh "sudo systemctl is-active crio": exit status 1 (577.784058ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

TestFunctional/parallel/License (0.97s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.97s)

TestFunctional/parallel/Version/short (0.12s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (1.06s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 version -o=json --components: (1.057796873s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-602000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-602000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-602000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.38s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-602000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.26.3           | 92ed2bec97a63 | 65.6MB |
| gcr.io/google-containers/addon-resizer      | functional-602000 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-602000 | a3d68d2473d78 | 30B    |
| docker.io/library/nginx                     | alpine            | 8e75cbc5b25c8 | 41MB   |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/kube-scheduler              | v1.26.3           | 5a79047369329 | 56.4MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest            | 080ed0ed8312d | 142MB  |
| registry.k8s.io/kube-apiserver              | v1.26.3           | 1d9b3cbae03ce | 134MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.3           | ce8c2293ef09c | 123MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7               | 8aea3fb7309a3 | 455MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-602000 image ls --format json:
[{"id":"92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.3"],"size":"65599999"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"a3d68d2473d78852638137d3f6ba2af22e28a02aef1d3dd1dad0f8aac8fb07b6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-602000"],"size":"30"},{"id":"8e75cbc5b25c8438fcfe2e7c12c98409d5f161cbb668d6c444e02796691ada70","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"8aea3fb7309a304def7ce3018a44b4f732de4dece
a4fba7e7520ff703bc5135c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-602000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.3"],"size":"134000000"},{"id":"5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.3"],"size":"56400000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.
k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.3"],"size":"123000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"74200
0"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-602000 image ls --format yaml:
- id: 8e75cbc5b25c8438fcfe2e7c12c98409d5f161cbb668d6c444e02796691ada70
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: 8aea3fb7309a304def7ce3018a44b4f732de4decea4fba7e7520ff703bc5135c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 5a79047369329dff4a02e705e650664d2019e583b802416447a6a17e9debb62d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.3
size: "56400000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: a3d68d2473d78852638137d3f6ba2af22e28a02aef1d3dd1dad0f8aac8fb07b6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-602000
size: "30"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ce8c2293ef09c9987773345638026f9f7aed16bc52e7a6ea507f0c655ab17161
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.3
size: "123000000"
- id: 92ed2bec97a637010666d6c4aa4d69b672baec0fd5d236d142e4227a3a0557d8
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.3
size: "65599999"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-602000
size: "32900000"
- id: 1d9b3cbae03cea2a1766cfa5bf06a5a9c7a7bdbc6f5322756e29ac78e76f2708
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.3
size: "134000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 080ed0ed8312deca92e9a769b518cdfa20f5278359bd156f3469dd8fa532db6b
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (8.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh pgrep buildkitd
2023/03/30 08:47:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 ssh pgrep buildkitd: exit status 1 (422.691488ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image build -t localhost/my-image:functional-602000 testdata/build
functional_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 image build -t localhost/my-image:functional-602000 testdata/build: (8.075940549s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-602000 image build -t localhost/my-image:functional-602000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in d535ba2a888d
Removing intermediate container d535ba2a888d
---> c5ab56a5e61e
Step 3/3 : ADD content.txt /
---> 900a49792df0
Successfully built 900a49792df0
Successfully tagged localhost/my-image:functional-602000
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-602000 image build -t localhost/my-image:functional-602000 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.79s)

TestFunctional/parallel/ImageCommands/Setup (2.83s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.76343761s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-602000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.83s)

TestFunctional/parallel/DockerEnv/bash (2.08s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-602000 docker-env) && out/minikube-darwin-amd64 status -p functional-602000"
E0330 08:46:12.635237   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:46:12.955410   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 08:46:13.595742   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:494: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-602000 docker-env) && out/minikube-darwin-amd64 status -p functional-602000": (1.338820977s)
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-602000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.08s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.4s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:353: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000: (3.565302159s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.87s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000: (2.225268955s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.62s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0330 08:46:22.557354   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.683350846s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:243: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 image load --daemon gcr.io/google-containers/addon-resizer:functional-602000: (4.045349563s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image save gcr.io/google-containers/addon-resizer:functional-602000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 image save gcr.io/google-containers/addon-resizer:functional-602000 /Users/jenkins/workspace/addon-resizer-save.tar: (2.099343063s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.10s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image rm gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image load /Users/jenkins/workspace/addon-resizer-save.tar
E0330 08:46:32.797964   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:407: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.739401061s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 image save --daemon gcr.io/google-containers/addon-resizer:functional-602000
functional_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p functional-602000 image save --daemon gcr.io/google-containers/addon-resizer:functional-602000: (2.469086798s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-602000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.60s)

TestFunctional/parallel/ServiceCmd/DeployApp (15.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-602000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-602000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-bq77n" [32d0dd36-c177-4df8-b302-89a8eb4c04df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-bq77n" [32d0dd36-c177-4df8-b302-89a8eb4c04df] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.008276746s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.13s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-602000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-602000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-602000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-602000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 27666: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-602000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-602000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [535c3e06-70db-41d5-98bb-7ff107be5f94] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [535c3e06-70db-41d5-98bb-7ff107be5f94] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.009011782s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/ServiceCmd/List (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 service list -o json
functional_test.go:1492: Took "624.366474ms" to run "out/minikube-darwin-amd64 -p functional-602000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 service --namespace=default --https --url hello-node
E0330 08:46:53.278977   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:1507: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 service --namespace=default --https --url hello-node: signal: killed (15.001483702s)

-- stdout --
	https://127.0.0.1:55836

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1520: found endpoint: https://127.0.0.1:55836
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-602000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-602000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 27700: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/Format (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 service hello-node --url --format={{.IP}}
functional_test.go:1538: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 service hello-node --url --format={{.IP}}: signal: killed (15.002508899s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 service hello-node --url
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 service hello-node --url: signal: killed (15.003527542s)

-- stdout --
	http://127.0.0.1:55878

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1563: found endpoint for hello-node: http://127.0.0.1:55878
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1313: Took "419.647691ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1327: Took "64.796971ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1364: Took "424.961241ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1377: Took "75.69822ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/MountCmd/any-port (10.71s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-602000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port4066457290/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1680191257932930000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port4066457290/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1680191257932930000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port4066457290/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1680191257932930000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port4066457290/001/test-1680191257932930000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (716.085361ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 30 15:47 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 30 15:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 30 15:47 test-1680191257932930000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh cat /mount-9p/test-1680191257932930000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-602000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fff8ee62-7077-41b6-af76-814a8067fa9a] Pending
helpers_test.go:344: "busybox-mount" [fff8ee62-7077-41b6-af76-814a8067fa9a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fff8ee62-7077-41b6-af76-814a8067fa9a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fff8ee62-7077-41b6-af76-814a8067fa9a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007738936s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-602000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-602000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port4066457290/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.71s)

TestFunctional/parallel/MountCmd/specific-port (2.76s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-602000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2625910854/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (535.886405ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-602000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2625910854/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-602000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-602000 ssh "sudo umount -f /mount-9p": exit status 1 (452.923203ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-602000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-602000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2625910854/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.76s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-602000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-602000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-602000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.29s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-582000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-582000: (2.288871529s)
--- PASS: TestImageBuild/serial/NormalBuild (2.29s)

TestImageBuild/serial/BuildWithBuildArg (0.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-582000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-582000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-582000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

TestJSONOutput/start/Command (40.51s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-781000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-781000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (40.50776045s)
--- PASS: TestJSONOutput/start/Command (40.51s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-781000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-781000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-781000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-781000 --output=json --user=testUser: (5.728346987s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-609000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-609000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (349.770001ms)

-- stdout --
	{"specversion":"1.0","id":"46b3f08b-4a8e-4b93-81c5-61666bee6618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-609000] minikube v1.29.0 on Darwin 13.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"06b4f546-6803-4e3f-8183-ddbb30ffa13f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16199"}}
	{"specversion":"1.0","id":"77eeee61-f448-4179-8d72-b900d30c75f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig"}}
	{"specversion":"1.0","id":"582d8c6d-15a9-4e7d-8144-8ebf01e17e6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"273e0bc1-9462-47bd-93e8-a10333ca298e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5debc18c-17ab-479d-9f3b-3930a45e525b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube"}}
	{"specversion":"1.0","id":"b8be0d8a-cac3-440b-84e2-51ae84e1b18f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d3c31480-bc67-401e-8134-4dbc559c4c85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-609000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-609000
--- PASS: TestErrorJSONOutput (0.76s)

TestKicCustomNetwork/create_custom_network (27.78s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-813000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-813000 --network=: (25.091291288s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-813000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-813000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-813000: (2.631171389s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.78s)

TestKicCustomNetwork/use_default_bridge_network (27.48s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-488000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-488000 --network=bridge: (24.907454008s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-488000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-488000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-488000: (2.511756453s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.48s)

TestKicExistingNetwork (27.06s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-047000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-047000 --network=existing-network: (24.209524861s)
helpers_test.go:175: Cleaning up "existing-network-047000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-047000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-047000: (2.452292465s)
--- PASS: TestKicExistingNetwork (27.06s)

TestKicCustomSubnet (27.25s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-045000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-045000 --subnet=192.168.60.0/24: (24.548478945s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-045000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-045000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-045000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-045000: (2.639794068s)
--- PASS: TestKicCustomSubnet (27.25s)

TestKicStaticIP (27.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-017000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-017000 --static-ip=192.168.200.200: (24.305986744s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-017000 ip
helpers_test.go:175: Cleaning up "static-ip-017000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-017000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-017000: (2.631014648s)
--- PASS: TestKicStaticIP (27.17s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (57.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-509000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-509000 --driver=docker : (24.243927692s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-511000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-511000 --driver=docker : (25.922839369s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-509000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-511000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-511000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-511000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-511000: (2.68193703s)
helpers_test.go:175: Cleaning up "first-509000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-509000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-509000: (2.755055685s)
--- PASS: TestMinikubeProfile (57.38s)

TestMountStart/serial/StartWithMountFirst (8.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-090000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-090000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.379151164s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.38s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-090000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (8.62s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-104000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-104000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.621146551s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.62s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-104000 ssh -- ls /minikube-host
E0330 09:01:12.275262   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (2.17s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-090000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-090000 --alsologtostderr -v=5: (2.174344549s)
--- PASS: TestMountStart/serial/DeleteFirst (2.17s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-104000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.59s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-104000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-104000: (1.592363632s)
--- PASS: TestMountStart/serial/Stop (1.59s)

TestMountStart/serial/RestartStopped (6.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-104000
E0330 09:01:17.956885   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-104000: (5.166832816s)
--- PASS: TestMountStart/serial/RestartStopped (6.17s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-104000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (102.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-950000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0330 09:02:35.322867   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-950000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m41.583052411s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.29s)

TestMultiNode/serial/DeployApp2Nodes (38.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-950000 -- rollout status deployment/busybox: (3.736598047s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-czqwz -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-vbgz5 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-czqwz -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-vbgz5 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-czqwz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-vbgz5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (38.56s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-czqwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-czqwz -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-vbgz5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-950000 -- exec busybox-6b86dd6d48-vbgz5 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

TestMultiNode/serial/AddNode (20.21s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-950000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-950000 -v 3 --alsologtostderr: (19.073063082s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr
multinode_test.go:116: (dbg) Done: out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr: (1.132768195s)
--- PASS: TestMultiNode/serial/AddNode (20.21s)

TestMultiNode/serial/ProfileList (0.44s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

TestMultiNode/serial/CopyFile (14.72s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp testdata/cp-test.txt multinode-950000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile2666263135/001/cp-test_multinode-950000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000:/home/docker/cp-test.txt multinode-950000-m02:/home/docker/cp-test_multinode-950000_multinode-950000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m02 "sudo cat /home/docker/cp-test_multinode-950000_multinode-950000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000:/home/docker/cp-test.txt multinode-950000-m03:/home/docker/cp-test_multinode-950000_multinode-950000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m03 "sudo cat /home/docker/cp-test_multinode-950000_multinode-950000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp testdata/cp-test.txt multinode-950000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile2666263135/001/cp-test_multinode-950000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000-m02:/home/docker/cp-test.txt multinode-950000:/home/docker/cp-test_multinode-950000-m02_multinode-950000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000 "sudo cat /home/docker/cp-test_multinode-950000-m02_multinode-950000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000-m02:/home/docker/cp-test.txt multinode-950000-m03:/home/docker/cp-test_multinode-950000-m02_multinode-950000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m03 "sudo cat /home/docker/cp-test_multinode-950000-m02_multinode-950000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp testdata/cp-test.txt multinode-950000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile2666263135/001/cp-test_multinode-950000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000-m03:/home/docker/cp-test.txt multinode-950000:/home/docker/cp-test_multinode-950000-m03_multinode-950000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000 "sudo cat /home/docker/cp-test_multinode-950000-m03_multinode-950000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 cp multinode-950000-m03:/home/docker/cp-test.txt multinode-950000-m02:/home/docker/cp-test_multinode-950000-m03_multinode-950000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 ssh -n multinode-950000-m02 "sudo cat /home/docker/cp-test_multinode-950000-m03_multinode-950000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.72s)

TestMultiNode/serial/StopNode (3.08s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-950000 node stop m03: (1.538885487s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-950000 status: exit status 7 (765.472479ms)

-- stdout --
	multinode-950000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-950000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-950000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr: exit status 7 (771.306306ms)

-- stdout --
	multinode-950000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-950000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-950000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0330 09:04:25.124400   31919 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:04:25.124627   31919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:04:25.124633   31919 out.go:309] Setting ErrFile to fd 2...
	I0330 09:04:25.124637   31919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:04:25.124764   31919 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:04:25.124949   31919 out.go:303] Setting JSON to false
	I0330 09:04:25.124972   31919 mustload.go:65] Loading cluster: multinode-950000
	I0330 09:04:25.125079   31919 notify.go:220] Checking for updates...
	I0330 09:04:25.125315   31919 config.go:182] Loaded profile config "multinode-950000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:04:25.125331   31919 status.go:255] checking status of multinode-950000 ...
	I0330 09:04:25.125819   31919 cli_runner.go:164] Run: docker container inspect multinode-950000 --format={{.State.Status}}
	I0330 09:04:25.187540   31919 status.go:330] multinode-950000 host status = "Running" (err=<nil>)
	I0330 09:04:25.187576   31919 host.go:66] Checking if "multinode-950000" exists ...
	I0330 09:04:25.187841   31919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-950000
	I0330 09:04:25.251562   31919 host.go:66] Checking if "multinode-950000" exists ...
	I0330 09:04:25.251832   31919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:04:25.251895   31919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-950000
	I0330 09:04:25.312694   31919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56381 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/multinode-950000/id_rsa Username:docker}
	I0330 09:04:25.399409   31919 ssh_runner.go:195] Run: systemctl --version
	I0330 09:04:25.404060   31919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:04:25.413605   31919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-950000
	I0330 09:04:25.475045   31919 kubeconfig.go:92] found "multinode-950000" server: "https://127.0.0.1:56380"
	I0330 09:04:25.475070   31919 api_server.go:165] Checking apiserver status ...
	I0330 09:04:25.475111   31919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0330 09:04:25.485619   31919 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1948/cgroup
	W0330 09:04:25.494240   31919 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1948/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0330 09:04:25.494294   31919 ssh_runner.go:195] Run: ls
	I0330 09:04:25.498217   31919 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56380/healthz ...
	I0330 09:04:25.503238   31919 api_server.go:278] https://127.0.0.1:56380/healthz returned 200:
	ok
	I0330 09:04:25.503252   31919 status.go:421] multinode-950000 apiserver status = Running (err=<nil>)
	I0330 09:04:25.503262   31919 status.go:257] multinode-950000 status: &{Name:multinode-950000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0330 09:04:25.503273   31919 status.go:255] checking status of multinode-950000-m02 ...
	I0330 09:04:25.503498   31919 cli_runner.go:164] Run: docker container inspect multinode-950000-m02 --format={{.State.Status}}
	I0330 09:04:25.565359   31919 status.go:330] multinode-950000-m02 host status = "Running" (err=<nil>)
	I0330 09:04:25.565379   31919 host.go:66] Checking if "multinode-950000-m02" exists ...
	I0330 09:04:25.565671   31919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-950000-m02
	I0330 09:04:25.626923   31919 host.go:66] Checking if "multinode-950000-m02" exists ...
	I0330 09:04:25.627194   31919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0330 09:04:25.627243   31919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-950000-m02
	I0330 09:04:25.693647   31919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56414 SSHKeyPath:/Users/jenkins/minikube-integration/16199-24978/.minikube/machines/multinode-950000-m02/id_rsa Username:docker}
	I0330 09:04:25.778195   31919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0330 09:04:25.787786   31919 status.go:257] multinode-950000-m02 status: &{Name:multinode-950000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0330 09:04:25.787807   31919 status.go:255] checking status of multinode-950000-m03 ...
	I0330 09:04:25.788077   31919 cli_runner.go:164] Run: docker container inspect multinode-950000-m03 --format={{.State.Status}}
	I0330 09:04:25.850490   31919 status.go:330] multinode-950000-m03 host status = "Stopped" (err=<nil>)
	I0330 09:04:25.850511   31919 status.go:343] host is not running, skipping remaining checks
	I0330 09:04:25.850520   31919 status.go:257] multinode-950000-m03 status: &{Name:multinode-950000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.08s)

TestMultiNode/serial/StartAfterStop (10.54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-950000 node start m03 --alsologtostderr: (9.411773365s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status
multinode_test.go:261: (dbg) Done: out/minikube-darwin-amd64 -p multinode-950000 status: (1.008892022s)
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.54s)

TestMultiNode/serial/RestartKeepsNodes (88.76s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-950000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-950000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-950000: (23.210329263s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-950000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-950000 --wait=true -v=8 --alsologtostderr: (1m5.460029654s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-950000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.76s)

TestMultiNode/serial/DeleteNode (6.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-950000 node delete m03: (5.331799708s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.25s)

TestMultiNode/serial/StopMultiNode (21.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 stop
E0330 09:06:12.276889   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:06:17.961268   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-950000 stop: (21.650593477s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-950000 status: exit status 7 (165.041611ms)

-- stdout --
	multinode-950000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-950000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr: exit status 7 (162.073697ms)

-- stdout --
	multinode-950000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-950000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0330 09:06:33.256780   32475 out.go:296] Setting OutFile to fd 1 ...
	I0330 09:06:33.256975   32475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:06:33.256980   32475 out.go:309] Setting ErrFile to fd 2...
	I0330 09:06:33.256983   32475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0330 09:06:33.257103   32475 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16199-24978/.minikube/bin
	I0330 09:06:33.257307   32475 out.go:303] Setting JSON to false
	I0330 09:06:33.257345   32475 mustload.go:65] Loading cluster: multinode-950000
	I0330 09:06:33.257397   32475 notify.go:220] Checking for updates...
	I0330 09:06:33.257641   32475 config.go:182] Loaded profile config "multinode-950000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.3
	I0330 09:06:33.257658   32475 status.go:255] checking status of multinode-950000 ...
	I0330 09:06:33.258093   32475 cli_runner.go:164] Run: docker container inspect multinode-950000 --format={{.State.Status}}
	I0330 09:06:33.316695   32475 status.go:330] multinode-950000 host status = "Stopped" (err=<nil>)
	I0330 09:06:33.316712   32475 status.go:343] host is not running, skipping remaining checks
	I0330 09:06:33.316718   32475 status.go:257] multinode-950000 status: &{Name:multinode-950000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0330 09:06:33.316740   32475 status.go:255] checking status of multinode-950000-m02 ...
	I0330 09:06:33.317025   32475 cli_runner.go:164] Run: docker container inspect multinode-950000-m02 --format={{.State.Status}}
	I0330 09:06:33.375559   32475 status.go:330] multinode-950000-m02 host status = "Stopped" (err=<nil>)
	I0330 09:06:33.375584   32475 status.go:343] host is not running, skipping remaining checks
	I0330 09:06:33.375592   32475 status.go:257] multinode-950000-m02 status: &{Name:multinode-950000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.98s)

TestMultiNode/serial/RestartMultiNode (51.92s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-950000 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-950000 --wait=true -v=8 --alsologtostderr --driver=docker : (51.013971434s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-950000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.92s)

TestMultiNode/serial/ValidateNameConflict (29.52s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-950000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-950000-m02 --driver=docker 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-950000-m02 --driver=docker : exit status 14 (405.034519ms)

                                                
                                                
-- stdout --
	* [multinode-950000-m02] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-950000-m02' is duplicated with machine name 'multinode-950000-m02' in profile 'multinode-950000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-950000-m03 --driver=docker 
E0330 09:07:41.015555   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-950000-m03 --driver=docker : (25.888234963s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-950000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-950000: exit status 80 (533.537207ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-950000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-950000-m03 already exists in multinode-950000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-950000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-950000-m03: (2.648703022s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (29.52s)

                                                
                                    
TestPreload (141.97s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-521000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-521000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m11.339447562s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-521000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-521000 -- docker pull gcr.io/k8s-minikube/busybox: (2.674359236s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-521000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-521000: (10.848378493s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-521000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-521000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (53.931869254s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-521000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-521000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-521000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-521000: (2.70464368s)
--- PASS: TestPreload (141.97s)

                                                
                                    
TestScheduledStopUnix (99.04s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-213000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-213000 --memory=2048 --driver=docker : (24.847980904s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-213000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-213000 -n scheduled-stop-213000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-213000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-213000 --cancel-scheduled
E0330 09:11:12.278734   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-213000 -n scheduled-stop-213000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-213000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-213000 --schedule 15s
E0330 09:11:17.962753   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-213000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-213000: exit status 7 (113.380624ms)

                                                
                                                
-- stdout --
	scheduled-stop-213000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-213000 -n scheduled-stop-213000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-213000 -n scheduled-stop-213000: exit status 7 (103.753964ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-213000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-213000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-213000: (2.350011503s)
--- PASS: TestScheduledStopUnix (99.04s)

                                                
                                    
TestInsufficientStorage (14.61s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-921000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-921000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.410427087s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6ccd73a7-d761-4955-8c94-cc821882c3a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-921000] minikube v1.29.0 on Darwin 13.3","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0221e5f6-b56f-4b2a-860d-95f78917eaa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16199"}}
	{"specversion":"1.0","id":"9895823b-bce5-4d47-8cf4-768ce3feaae9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig"}}
	{"specversion":"1.0","id":"d28e8ada-25ec-49ba-89dd-6f4cacb54223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"73902ee0-a62b-4619-bcd3-8fa26ad711aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"770b9a24-2fef-4cb3-afe6-05a9885070c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube"}}
	{"specversion":"1.0","id":"674adb05-c72b-478d-b798-7363e485ebc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4b7b4cd3-253f-425e-bc2b-ccb1b4e43d8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d18f30f5-133d-46ce-a705-c0c47ba10f96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1ab85465-fadb-45b3-a24b-354ec1254672","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2380657f-91fc-44ef-8016-0045e966e499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"b5f29af2-74c6-48d2-81d3-814eaa641054","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-921000 in cluster insufficient-storage-921000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2530a54-e446-43ba-b589-dc279a9b46b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c3ee19e-d917-4719-b560-6760685a7d65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd9cd28e-e3a6-405a-8679-ed2006565272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-921000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-921000 --output=json --layout=cluster: exit status 7 (391.92419ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-921000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-921000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:12:56.047835   34249 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-921000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-921000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-921000 --output=json --layout=cluster: exit status 7 (390.944455ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-921000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-921000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0330 09:12:56.439442   34261 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-921000" does not appear in /Users/jenkins/minikube-integration/16199-24978/kubeconfig
	E0330 09:12:56.448514   34261 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/insufficient-storage-921000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-921000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-921000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-921000: (2.418024452s)
--- PASS: TestInsufficientStorage (14.61s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (16.67s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=16199
- KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current124347430/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current124347430/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current124347430/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current124347430/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (16.67s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (15.95s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=16199
- KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current425419408/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current425419408/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current425419408/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current425419408/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (15.95s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.39s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-773000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-773000: (3.659526464s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.66s)

                                                
                                    
TestPause/serial/Start (50.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-760000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-760000 --memory=2048 --install-addons=false --wait=all --driver=docker : (50.948823015s)
--- PASS: TestPause/serial/Start (50.95s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (42.19s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-760000 --alsologtostderr -v=1 --driver=docker 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-760000 --alsologtostderr -v=1 --driver=docker : (42.174303969s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.19s)

                                                
                                    
TestPause/serial/Pause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-760000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-760000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-760000 --output=json --layout=cluster: exit status 2 (409.693757ms)

                                                
                                                
-- stdout --
	{"Name":"pause-760000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-760000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)

                                                
                                    
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-760000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
TestPause/serial/PauseAgain (0.73s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-760000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

                                                
                                    
TestPause/serial/DeletePaused (2.65s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-760000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-760000 --alsologtostderr -v=5: (2.652925382s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-760000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-760000: exit status 1 (57.50724ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-760000

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-052000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-052000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (370.161318ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-052000] minikube v1.29.0 on Darwin 13.3
	  - MINIKUBE_LOCATION=16199
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16199-24978/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16199-24978/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-052000 --driver=docker 
E0330 09:21:12.284182   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:21:17.967858   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-052000 --driver=docker : (25.364659128s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-052000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-052000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-052000 --no-kubernetes --driver=docker : (6.175167207s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-052000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-052000 status -o json: exit status 2 (401.213203ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-052000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-052000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-052000: (2.454313157s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.03s)

                                                
                                    
TestNoKubernetes/serial/Start (7.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-052000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-052000 --no-kubernetes --driver=docker : (7.329991934s)
--- PASS: TestNoKubernetes/serial/Start (7.33s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-052000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-052000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (380.725489ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-052000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-052000: (1.60102339s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-052000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-052000 --driver=docker : (5.121346113s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-052000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-052000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (379.173323ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (44.131170068s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.13s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-x56qm" [33d3ea72-31c9-4eeb-8ed5-22dc66c89d69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-x56qm" [33d3ea72-31c9-4eeb-8ed5-22dc66c89d69] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.008729405s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.19s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (54.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (54.763651702s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.76s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m9.05818081s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qff6g" [fc360005-9f00-44cd-a46b-00d23d31e71a] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019178477s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.50s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2mt4s" [cc11c31c-5a84-4377-ae0f-41a21d192750] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0330 09:24:21.024909   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-2mt4s" [cc11c31c-5a84-4377-ae0f-41a21d192750] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00842626s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (57.335842371s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.34s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-ncwrn" [b83957ee-370f-4ab1-8b5b-00292e2bc7f2] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017909082s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-mrhfv" [f0d0f532-ea20-4403-ae65-0a1087624e4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-mrhfv" [f0d0f532-ea20-4403-ae65-0a1087624e4b] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.009802191s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-trx57" [732f96b2-1e91-49b2-b87b-a913d75dab8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-trx57" [732f96b2-1e91-49b2-b87b-a913d75dab8d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.020280085s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.24s)

                                                
                                    
TestNetworkPlugins/group/false/Start (42.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (42.018297007s)
--- PASS: TestNetworkPlugins/group/false/Start (42.02s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (43.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (43.107256448s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.11s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-fkpnn" [568ed1dd-b59f-4839-98e5-bf5c51cd0853] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-fkpnn" [568ed1dd-b59f-4839-98e5-bf5c51cd0853] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.008953995s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-cb6kq" [53995dbb-f985-43a4-a442-852a81af31a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-cb6kq" [53995dbb-f985-43a4-a442-852a81af31a3] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.006793561s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)

TestNetworkPlugins/group/flannel/Start (55.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (55.886689825s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.89s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (44.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0330 09:28:03.424481   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (44.016685179s)
--- PASS: TestNetworkPlugins/group/bridge/Start (44.02s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-s2jmn" [c8d0d2d4-e6c0-46bc-b154-e7a11ae59507] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.013805336s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/flannel/NetCatPod (13.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-n2rq9" [d39b62fa-2956-47f5-abd5-e6ff96cbbb2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0330 09:28:23.905383   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-n2rq9" [d39b62fa-2956-47f5-abd5-e6ff96cbbb2a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.010046657s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.20s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (13.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-nh9bl" [b682d220-2db0-4030-a8d7-bd968e577d33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-nh9bl" [b682d220-2db0-4030-a8d7-bd968e577d33] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.006401493s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.20s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (45.74s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0330 09:29:04.866010   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
E0330 09:29:13.433713   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:13.438807   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:13.449029   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:13.469665   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:13.511260   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:13.591356   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:13.751448   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:29:14.071808   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-378000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (45.742596185s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.74s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-378000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-378000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-jj84p" [b478416d-780c-4e82-9281-9787da463e75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0330 09:29:54.462748   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-jj84p" [b478416d-780c-4e82-9281-9787da463e75] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.009863235s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.19s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-378000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-378000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (66.63s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-578000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0
E0330 09:30:23.925103   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:30:26.787092   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
E0330 09:30:29.045711   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:30:35.425227   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:30:39.286016   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:30:53.100380   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:53.105942   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:53.116538   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:53.137449   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:53.177554   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:53.257796   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:53.418190   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:53.740411   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:54.382331   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:55.662585   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:58.223135   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:30:59.766394   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:31:03.343503   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:31:12.288579   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:31:13.585741   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:31:17.973120   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-578000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0: (1m6.634732899s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.63s)
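
Note: --preload=false disables minikube's preloaded image tarball for the requested Kubernetes version, so images are pulled individually; that largely explains why this start (1m6s) is slower than the CNI starts above. The interleaved cert_rotation errors appear to come from client-go watchers still referencing client certificates of profiles deleted by earlier tests and do not affect the result. The full invocation was:

    out/minikube-darwin-amd64 start -p no-preload-578000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.27.0-rc.0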

TestStartStop/group/no-preload/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-578000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f417a7cf-a445-4577-bcc2-7a7a7ff1cc91] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0330 09:31:34.066235   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f417a7cf-a445-4577-bcc2-7a7a7ff1cc91] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015439714s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-578000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-578000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-578000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/no-preload/serial/Stop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-578000 --alsologtostderr -v=3
E0330 09:31:40.727292   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:31:45.913662   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:45.919445   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:45.929688   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:45.950039   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:45.991499   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:46.072603   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:46.232856   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:46.553702   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:47.194241   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:48.474506   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:51.035984   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-578000 --alsologtostderr -v=3: (11.013800092s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-578000 -n no-preload-578000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-578000 -n no-preload-578000: exit status 7 (104.102318ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-578000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)
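
Note: the non-zero exit above is expected. minikube status encodes the host, kubelet and apiserver states in the low bits of its exit code, so exit status 7 on a stopped profile means all three components are down rather than that the command itself failed, which is why the test records it as "(may be ok)". The probe it uses is:

    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-578000 -n no-preload-578000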

TestStartStop/group/no-preload/serial/SecondStart (304.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-578000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0
E0330 09:31:56.156702   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:31:57.346636   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:32:06.396975   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:32:15.028019   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:32:15.900336   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:15.906060   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:15.916754   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:15.937574   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:15.978267   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:16.058474   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:16.219935   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:16.540294   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:17.182065   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:18.462644   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:21.023255   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:26.143764   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:26.877562   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:32:36.384156   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:32:42.943118   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
E0330 09:32:56.862584   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:33:02.641933   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:33:07.828016   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:33:10.616964   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
E0330 09:33:18.196788   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:18.203229   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:18.213640   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:18.235846   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:18.276473   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:18.357552   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:18.518003   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:18.838072   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:19.479180   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:20.759267   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:23.319336   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:33:28.439626   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-578000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.0-rc.0: (5m4.007214716s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-578000 -n no-preload-578000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (304.44s)
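
Note: SecondStart reruns the same start command against the profile that was just stopped; the restart alone took 5m4s of the 304.44s total, and the UserAppExistsAfterStop and AddonExistsAfterStop subtests further down verify that the dashboard addon and the user workload survived the stop/start cycle.

    out/minikube-darwin-amd64 start -p no-preload-578000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.27.0-rc.0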

TestStartStop/group/old-k8s-version/serial/Stop (1.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-331000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-331000 --alsologtostderr -v=3: (1.629337197s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-331000 -n old-k8s-version-331000: exit status 7 (112.258492ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-331000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-t86zw" [7406498a-87b8-4299-9853-71af0355c7ed] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-t86zw" [7406498a-87b8-4299-9853-71af0355c7ed] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.015640757s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-t86zw" [7406498a-87b8-4299-9853-71af0355c7ed] Running
E0330 09:37:13.583358   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:37:15.882134   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009357759s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-578000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-578000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)
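
Note: VerifyKubernetesImages lists the images in the node's container runtime over SSH and flags anything outside the set minikube ships for this Kubernetes version; the busybox image it reports is the leftover from the DeployApp step, not an error. The underlying command is:

    out/minikube-darwin-amd64 ssh -p no-preload-578000 "sudo crictl images -o json"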

TestStartStop/group/no-preload/serial/Pause (3.18s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-578000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-578000 -n no-preload-578000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-578000 -n no-preload-578000: exit status 2 (411.366395ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-578000 -n no-preload-578000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-578000 -n no-preload-578000: exit status 2 (427.766327ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-578000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-578000 -n no-preload-578000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-578000 -n no-preload-578000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)
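
Note: the Pause subtest drives a pause/verify/unpause/verify cycle: while paused the apiserver reports Paused and the kubelet reports Stopped (hence the expected exit status 2), and after unpause the same status calls succeed. A hand-run sketch of the sequence, dropping the test's extra logging flags:

    out/minikube-darwin-amd64 pause -p no-preload-578000
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-578000 -n no-preload-578000
    out/minikube-darwin-amd64 unpause -p no-preload-578000
    out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-578000 -n no-preload-578000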

TestStartStop/group/embed-certs/serial/FirstStart (50.8s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3
E0330 09:37:32.127358   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:37:42.924905   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
E0330 09:37:43.567456   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3: (50.795858492s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.80s)
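
Note: --embed-certs writes the client certificate and key data directly into the generated kubeconfig entry instead of referencing the files under the .minikube profile directory; otherwise this FirstStart follows the same flow as the others. The invocation was:

    out/minikube-darwin-amd64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.26.3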

TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-995000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4111baad-0767-47f0-ac19-43bd12035502] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0330 09:38:18.193061   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4111baad-0767-47f0-ac19-43bd12035502] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.015019352s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-995000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-995000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-995000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/embed-certs/serial/Stop (10.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-995000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-995000 --alsologtostderr -v=3: (10.944836851s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000: exit status 7 (103.889405ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-995000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/embed-certs/serial/SecondStart (313.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3
E0330 09:38:38.065705   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:38:45.901216   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
E0330 09:39:05.752471   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/bridge-378000/client.crt: no such file or directory
E0330 09:39:13.416696   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
E0330 09:39:48.282392   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:40:15.967880   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kubenet-378000/client.crt: no such file or directory
E0330 09:40:18.788342   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
E0330 09:40:53.082015   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/custom-flannel-378000/client.crt: no such file or directory
E0330 09:41:01.010924   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 09:41:12.271273   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
E0330 09:41:17.954787   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/functional-602000/client.crt: no such file or directory
E0330 09:41:30.619173   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:30.625562   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:30.636103   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:30.657535   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:30.697841   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:30.779064   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:30.939591   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:31.259801   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:31.900455   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:33.182716   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:35.743556   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:40.864020   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:41:45.897201   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/false-378000/client.crt: no such file or directory
E0330 09:41:51.104585   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:42:11.585254   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:42:15.884034   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/enable-default-cni-378000/client.crt: no such file or directory
E0330 09:42:42.924448   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/auto-378000/client.crt: no such file or directory
E0330 09:42:52.545839   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
E0330 09:43:18.193398   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/flannel-378000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-995000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.3: (5m12.585973735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-995000 -n embed-certs-995000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (313.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-gfl9s" [610bdd84-4173-458a-ad3e-a99793c8047a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-gfl9s" [610bdd84-4173-458a-ad3e-a99793c8047a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.020876943s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-gfl9s" [610bdd84-4173-458a-ad3e-a99793c8047a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009134641s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-995000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-995000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-995000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-995000 -n embed-certs-995000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-995000 -n embed-certs-995000: exit status 2 (413.591959ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-995000 -n embed-certs-995000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-995000 -n embed-certs-995000: exit status 2 (415.262802ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-995000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-995000 -n embed-certs-995000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-995000 -n embed-certs-995000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-582000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3
E0330 09:44:14.467345   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/no-preload-578000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-582000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3: (42.993212654s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-582000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [87d48d8c-f597-45ae-b0bf-85845c6b6a5f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [87d48d8c-f597-45ae-b0bf-85845c6b6a5f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.015311996s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-582000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-582000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-582000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-582000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-582000 --alsologtostderr -v=3: (10.96514469s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000: exit status 7 (106.714808ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-582000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (581.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-582000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-582000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.3: (9m41.475931857s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (581.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wp76j" [0c5e2635-c9b2-4975-ac6e-edc0a3d7ec98] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013099322s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-wp76j" [0c5e2635-c9b2-4975-ac6e-edc0a3d7ec98] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007780269s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-582000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-582000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-582000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000: exit status 2 (416.192265ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000: exit status 2 (417.20045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-582000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-582000 -n default-k8s-diff-port-582000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-996000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0
E0330 09:55:18.898995   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/calico-378000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-996000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0: (38.393069906s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-996000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-996000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-996000 --alsologtostderr -v=3: (10.948363152s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-996000 -n newest-cni-996000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-996000 -n newest-cni-996000: exit status 7 (112.741789ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-996000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (25.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-996000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-996000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.0-rc.0: (25.037656142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-996000 -n newest-cni-996000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-996000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-996000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-996000 -n newest-cni-996000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-996000 -n newest-cni-996000: exit status 2 (421.1115ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-996000 -n newest-cni-996000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-996000 -n newest-cni-996000: exit status 2 (417.802453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-996000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-996000 -n newest-cni-996000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-996000 -n newest-cni-996000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.21s)

                                                
                                    

Test skip (20/318)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.0-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.0-rc.0/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (15.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:305: registry stabilized in 9.920974ms
addons_test.go:307: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4gmgn" [bfd65e63-19ee-408f-8264-d7f5f18e4163] Running
addons_test.go:307: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01060454s
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-txd6z" [737040b4-fea6-4a8f-b249-53263551f2ab] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.030144011s
addons_test.go:315: (dbg) Run:  kubectl --context addons-443000 delete po -l run=registry-test --now
addons_test.go:320: (dbg) Run:  kubectl --context addons-443000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:320: (dbg) Done: kubectl --context addons-443000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.834011262s)
addons_test.go:330: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.97s)

                                                
                                    
TestAddons/parallel/Ingress (11.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:182: (dbg) Run:  kubectl --context addons-443000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Run:  kubectl --context addons-443000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:220: (dbg) Run:  kubectl --context addons-443000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:225: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [50ffd0f3-380b-4641-9387-84e4eaef4425] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [50ffd0f3-380b-4641-9387-84e4eaef4425] Running
addons_test.go:225: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.009084054s
addons_test.go:237: (dbg) Run:  out/minikube-darwin-amd64 -p addons-443000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:257: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.88s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:473: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-602000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-602000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-nzhjr" [c3caa1d5-b924-4536-a100-7636a12582ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-nzhjr" [c3caa1d5-b924-4536-a100-7636a12582ae] Running
E0330 08:47:34.241949   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/addons-443000/client.crt: no such file or directory
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.011619982s
functional_test.go:1644: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-378000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-378000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-378000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-378000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378000"

                                                
                                                
----------------------- debugLogs end: cilium-378000 [took: 5.510884322s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-378000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-378000
--- SKIP: TestNetworkPlugins/group/cilium (6.01s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-908000
E0330 09:44:13.417131   25448 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16199-24978/.minikube/profiles/kindnet-378000/client.crt: no such file or directory
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)

                                                
                                    