Test Report: Docker_macOS 15565

1a22b9432724c1a7c0bfc1f92a18db163006c245:2023-01-27:27621

Failed tests (14/306)

TestIngressAddonLegacy/StartLegacyK8sCluster (254.87s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0127 19:41:09.155859    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:43:25.303768    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:43:44.655060    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.660542    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.672694    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.694890    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.737011    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.817794    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:44.979003    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:45.301221    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:45.941458    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:47.222138    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:49.782363    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:43:52.996726    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:43:54.902471    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:44:05.143485    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:44:25.625087    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 19:45:06.585189    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.834888872s)

-- stdout --
	* [ingress-addon-legacy-054000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-054000 in cluster ingress-addon-legacy-054000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0127 19:40:53.682015    7492 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:40:53.682163    7492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:40:53.682168    7492 out.go:309] Setting ErrFile to fd 2...
	I0127 19:40:53.682172    7492 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:40:53.682288    7492 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 19:40:53.682861    7492 out.go:303] Setting JSON to false
	I0127 19:40:53.701307    7492 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2427,"bootTime":1674874826,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 19:40:53.701384    7492 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:40:53.723540    7492 out.go:177] * [ingress-addon-legacy-054000] minikube v1.28.0 on Darwin 13.2
	I0127 19:40:53.765870    7492 notify.go:220] Checking for updates...
	I0127 19:40:53.787197    7492 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 19:40:53.829853    7492 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 19:40:53.851315    7492 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:40:53.873331    7492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:40:53.895135    7492 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 19:40:53.917183    7492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 19:40:53.939568    7492 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 19:40:53.999883    7492 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 19:40:54.000099    7492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 19:40:54.146815    7492 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-28 03:40:54.051291725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 19:40:54.168156    7492 out.go:177] * Using the docker driver based on user configuration
	I0127 19:40:54.189195    7492 start.go:296] selected driver: docker
	I0127 19:40:54.189217    7492 start.go:840] validating driver "docker" against <nil>
	I0127 19:40:54.189255    7492 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 19:40:54.193185    7492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 19:40:54.335924    7492 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:51 SystemTime:2023-01-28 03:40:54.244118236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 19:40:54.336048    7492 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 19:40:54.336197    7492 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 19:40:54.358120    7492 out.go:177] * Using Docker Desktop driver with root privileges
	I0127 19:40:54.379724    7492 cni.go:84] Creating CNI manager for ""
	I0127 19:40:54.379759    7492 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 19:40:54.379778    7492 start_flags.go:319] config:
	{Name:ingress-addon-legacy-054000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:40:54.423850    7492 out.go:177] * Starting control plane node ingress-addon-legacy-054000 in cluster ingress-addon-legacy-054000
	I0127 19:40:54.445442    7492 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 19:40:54.466863    7492 out.go:177] * Pulling base image ...
	I0127 19:40:54.509709    7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0127 19:40:54.509713    7492 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 19:40:54.560844    7492 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0127 19:40:54.560870    7492 cache.go:57] Caching tarball of preloaded images
	I0127 19:40:54.561061    7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0127 19:40:54.582631    7492 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0127 19:40:54.625585    7492 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:40:54.614212    7492 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0127 19:40:54.625675    7492 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0127 19:40:54.711666    7492 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0127 19:40:57.219202    7492 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:40:57.219382    7492 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:40:57.841651    7492 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0127 19:40:57.841929    7492 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/config.json ...
	I0127 19:40:57.841954    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/config.json: {Name:mk7e9386f9c8348577381a5d689e80c6463f62a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:40:57.842262    7492 cache.go:193] Successfully downloaded all kic artifacts
	I0127 19:40:57.842288    7492 start.go:364] acquiring machines lock for ingress-addon-legacy-054000: {Name:mk028f3a902092b125e4b1d22762f6d6b2eef6d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 19:40:57.842416    7492 start.go:368] acquired machines lock for "ingress-addon-legacy-054000" in 120.893µs
	I0127 19:40:57.842438    7492 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-054000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 19:40:57.842519    7492 start.go:125] createHost starting for "" (driver="docker")
	I0127 19:40:57.868898    7492 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0127 19:40:57.869150    7492 start.go:159] libmachine.API.Create for "ingress-addon-legacy-054000" (driver="docker")
	I0127 19:40:57.869200    7492 client.go:168] LocalClient.Create starting
	I0127 19:40:57.869314    7492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem
	I0127 19:40:57.869358    7492 main.go:141] libmachine: Decoding PEM data...
	I0127 19:40:57.869376    7492 main.go:141] libmachine: Parsing certificate...
	I0127 19:40:57.869445    7492 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem
	I0127 19:40:57.869495    7492 main.go:141] libmachine: Decoding PEM data...
	I0127 19:40:57.869503    7492 main.go:141] libmachine: Parsing certificate...
	I0127 19:40:57.890362    7492 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-054000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 19:40:57.950566    7492 cli_runner.go:211] docker network inspect ingress-addon-legacy-054000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 19:40:57.950688    7492 network_create.go:281] running [docker network inspect ingress-addon-legacy-054000] to gather additional debugging logs...
	I0127 19:40:57.950708    7492 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-054000
	W0127 19:40:58.005807    7492 cli_runner.go:211] docker network inspect ingress-addon-legacy-054000 returned with exit code 1
	I0127 19:40:58.005838    7492 network_create.go:284] error running [docker network inspect ingress-addon-legacy-054000]: docker network inspect ingress-addon-legacy-054000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-054000
	I0127 19:40:58.005857    7492 network_create.go:286] output of [docker network inspect ingress-addon-legacy-054000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-054000
	
	** /stderr **
	I0127 19:40:58.005957    7492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 19:40:58.063088    7492 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005aafe0}
	I0127 19:40:58.063123    7492 network_create.go:123] attempt to create docker network ingress-addon-legacy-054000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0127 19:40:58.063198    7492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 ingress-addon-legacy-054000
	I0127 19:40:58.149765    7492 network_create.go:107] docker network ingress-addon-legacy-054000 192.168.49.0/24 created
	I0127 19:40:58.149803    7492 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-054000" container
	I0127 19:40:58.149917    7492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 19:40:58.204354    7492 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-054000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --label created_by.minikube.sigs.k8s.io=true
	I0127 19:40:58.259501    7492 oci.go:103] Successfully created a docker volume ingress-addon-legacy-054000
	I0127 19:40:58.259626    7492 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-054000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --entrypoint /usr/bin/test -v ingress-addon-legacy-054000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0127 19:40:58.746342    7492 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-054000
	I0127 19:40:58.746385    7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0127 19:40:58.746402    7492 kic.go:190] Starting extracting preloaded images to volume ...
	I0127 19:40:58.746518    7492 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-054000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 19:41:04.838894    7492 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-054000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (6.092341935s)
	I0127 19:41:04.838925    7492 kic.go:199] duration metric: took 6.092576 seconds to extract preloaded images to volume
	I0127 19:41:04.839063    7492 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 19:41:04.988435    7492 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-054000 --name ingress-addon-legacy-054000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-054000 --network ingress-addon-legacy-054000 --ip 192.168.49.2 --volume ingress-addon-legacy-054000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0127 19:41:05.347876    7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Running}}
	I0127 19:41:05.409529    7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
	I0127 19:41:05.471687    7492 cli_runner.go:164] Run: docker exec ingress-addon-legacy-054000 stat /var/lib/dpkg/alternatives/iptables
	I0127 19:41:05.582308    7492 oci.go:144] the created container "ingress-addon-legacy-054000" has a running status.
	I0127 19:41:05.582348    7492 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa...
	I0127 19:41:05.731509    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0127 19:41:05.731622    7492 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 19:41:05.834432    7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
	I0127 19:41:05.895065    7492 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 19:41:05.895084    7492 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-054000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 19:41:06.002676    7492 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
	I0127 19:41:06.059952    7492 machine.go:88] provisioning docker machine ...
	I0127 19:41:06.059994    7492 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-054000"
	I0127 19:41:06.060119    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:06.118646    7492 main.go:141] libmachine: Using SSH client type: native
	I0127 19:41:06.118858    7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50680 <nil> <nil>}
	I0127 19:41:06.118875    7492 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-054000 && echo "ingress-addon-legacy-054000" | sudo tee /etc/hostname
	I0127 19:41:06.263158    7492 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-054000
	
	I0127 19:41:06.263254    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:06.323355    7492 main.go:141] libmachine: Using SSH client type: native
	I0127 19:41:06.323524    7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50680 <nil> <nil>}
	I0127 19:41:06.323543    7492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-054000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-054000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-054000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 19:41:06.457662    7492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 19:41:06.457683    7492 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3092/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3092/.minikube}
	I0127 19:41:06.457699    7492 ubuntu.go:177] setting up certificates
	I0127 19:41:06.457707    7492 provision.go:83] configureAuth start
	I0127 19:41:06.457785    7492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-054000
	I0127 19:41:06.515960    7492 provision.go:138] copyHostCerts
	I0127 19:41:06.516009    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
	I0127 19:41:06.516089    7492 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem, removing ...
	I0127 19:41:06.516094    7492 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
	I0127 19:41:06.516215    7492 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem (1679 bytes)
	I0127 19:41:06.516390    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
	I0127 19:41:06.516421    7492 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem, removing ...
	I0127 19:41:06.516426    7492 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
	I0127 19:41:06.516489    7492 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem (1078 bytes)
	I0127 19:41:06.516631    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
	I0127 19:41:06.516669    7492 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem, removing ...
	I0127 19:41:06.516674    7492 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
	I0127 19:41:06.516746    7492 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem (1123 bytes)
	I0127 19:41:06.516884    7492 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-054000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-054000]
	I0127 19:41:06.558574    7492 provision.go:172] copyRemoteCerts
	I0127 19:41:06.558631    7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 19:41:06.558684    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:06.617025    7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
	I0127 19:41:06.714169    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0127 19:41:06.714257    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 19:41:06.731902    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0127 19:41:06.731993    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0127 19:41:06.749929    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0127 19:41:06.750006    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 19:41:06.767491    7492 provision.go:86] duration metric: configureAuth took 309.775211ms
	I0127 19:41:06.767505    7492 ubuntu.go:193] setting minikube options for container-runtime
	I0127 19:41:06.767655    7492 config.go:180] Loaded profile config "ingress-addon-legacy-054000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0127 19:41:06.767718    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:06.827430    7492 main.go:141] libmachine: Using SSH client type: native
	I0127 19:41:06.827595    7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50680 <nil> <nil>}
	I0127 19:41:06.827612    7492 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 19:41:06.962752    7492 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0127 19:41:06.962766    7492 ubuntu.go:71] root file system type: overlay
	I0127 19:41:06.962951    7492 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 19:41:06.963036    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:07.021819    7492 main.go:141] libmachine: Using SSH client type: native
	I0127 19:41:07.021984    7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50680 <nil> <nil>}
	I0127 19:41:07.022038    7492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 19:41:07.166728    7492 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 19:41:07.166834    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:07.224853    7492 main.go:141] libmachine: Using SSH client type: native
	I0127 19:41:07.225021    7492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 50680 <nil> <nil>}
	I0127 19:41:07.225034    7492 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 19:41:07.844740    7492 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 03:41:07.163682701 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0127 19:41:07.844764    7492 machine.go:91] provisioned docker machine in 1.784805821s
	I0127 19:41:07.844770    7492 client.go:171] LocalClient.Create took 9.9756485s
	I0127 19:41:07.844785    7492 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-054000" took 9.975719843s
	I0127 19:41:07.844795    7492 start.go:300] post-start starting for "ingress-addon-legacy-054000" (driver="docker")
	I0127 19:41:07.844800    7492 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 19:41:07.844937    7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 19:41:07.845046    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:07.904712    7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
	I0127 19:41:08.000545    7492 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 19:41:08.004273    7492 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 19:41:08.004298    7492 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 19:41:08.004309    7492 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 19:41:08.004322    7492 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0127 19:41:08.004331    7492 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/addons for local assets ...
	I0127 19:41:08.004461    7492 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/files for local assets ...
	I0127 19:41:08.004664    7492 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> 44062.pem in /etc/ssl/certs
	I0127 19:41:08.004672    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> /etc/ssl/certs/44062.pem
	I0127 19:41:08.004868    7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 19:41:08.012169    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /etc/ssl/certs/44062.pem (1708 bytes)
	I0127 19:41:08.029868    7492 start.go:303] post-start completed in 185.066357ms
	I0127 19:41:08.030502    7492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-054000
	I0127 19:41:08.089479    7492 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/config.json ...
	I0127 19:41:08.089926    7492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 19:41:08.090001    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:08.149122    7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
	I0127 19:41:08.240025    7492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 19:41:08.244765    7492 start.go:128] duration metric: createHost completed in 10.402322988s
	I0127 19:41:08.244792    7492 start.go:83] releasing machines lock for "ingress-addon-legacy-054000", held for 10.402452729s
	I0127 19:41:08.244915    7492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-054000
	I0127 19:41:08.302874    7492 ssh_runner.go:195] Run: cat /version.json
	I0127 19:41:08.302888    7492 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0127 19:41:08.302941    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:08.302948    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:08.364995    7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
	I0127 19:41:08.365132    7492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
	I0127 19:41:08.663419    7492 ssh_runner.go:195] Run: systemctl --version
	I0127 19:41:08.667991    7492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 19:41:08.672923    7492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 19:41:08.693745    7492 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 19:41:08.712132    7492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0127 19:41:08.729154    7492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0127 19:41:08.736955    7492 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 19:41:08.736969    7492 start.go:472] detecting cgroup driver to use...
	I0127 19:41:08.736982    7492 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 19:41:08.737080    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 19:41:08.750212    7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0127 19:41:08.758795    7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 19:41:08.767806    7492 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 19:41:08.767935    7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 19:41:08.776518    7492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 19:41:08.784958    7492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 19:41:08.793359    7492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 19:41:08.802063    7492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 19:41:08.810120    7492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 19:41:08.819024    7492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 19:41:08.826535    7492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 19:41:08.834053    7492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 19:41:08.901405    7492 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 19:41:08.977173    7492 start.go:472] detecting cgroup driver to use...
	I0127 19:41:08.977192    7492 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 19:41:08.977293    7492 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 19:41:08.988865    7492 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0127 19:41:08.988937    7492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 19:41:08.999690    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 19:41:09.013764    7492 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 19:41:09.105328    7492 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 19:41:09.207729    7492 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 19:41:09.207746    7492 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 19:41:09.221311    7492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 19:41:09.316742    7492 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 19:41:09.528142    7492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 19:41:09.558937    7492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 19:41:09.635563    7492 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.22 ...
	I0127 19:41:09.635791    7492 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-054000 dig +short host.docker.internal
	I0127 19:41:09.750290    7492 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0127 19:41:09.750406    7492 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0127 19:41:09.754775    7492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 19:41:09.764980    7492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:41:09.826262    7492 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0127 19:41:09.826339    7492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 19:41:09.851861    7492 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0127 19:41:09.851879    7492 docker.go:560] Images already preloaded, skipping extraction
	I0127 19:41:09.851951    7492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 19:41:09.876876    7492 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0127 19:41:09.876892    7492 cache_images.go:84] Images are preloaded, skipping loading
	I0127 19:41:09.876985    7492 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 19:41:09.949323    7492 cni.go:84] Creating CNI manager for ""
	I0127 19:41:09.949344    7492 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 19:41:09.949368    7492 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 19:41:09.949386    7492 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-054000 NodeName:ingress-addon-legacy-054000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0127 19:41:09.949517    7492 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-054000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 19:41:09.949615    7492 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-054000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 19:41:09.949678    7492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0127 19:41:09.957765    7492 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 19:41:09.957826    7492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 19:41:09.965385    7492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0127 19:41:09.978477    7492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0127 19:41:09.991478    7492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0127 19:41:10.004953    7492 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0127 19:41:10.008913    7492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 19:41:10.018778    7492 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000 for IP: 192.168.49.2
	I0127 19:41:10.018795    7492 certs.go:186] acquiring lock for shared ca certs: {Name:mk2d86ad31f10478b3fe72eedd54ef2fcd74cf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:41:10.018972    7492 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key
	I0127 19:41:10.019048    7492 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key
	I0127 19:41:10.019090    7492 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.key
	I0127 19:41:10.019104    7492 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.crt with IP's: []
	I0127 19:41:10.092741    7492 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.crt ...
	I0127 19:41:10.092754    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.crt: {Name:mk5948de52246f31ea9dca617aa13d451663230d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:41:10.093064    7492 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.key ...
	I0127 19:41:10.093072    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/client.key: {Name:mke380590e0a75d844fa50d6e66145fd00a430fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:41:10.093277    7492 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2
	I0127 19:41:10.093292    7492 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0127 19:41:10.368101    7492 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2 ...
	I0127 19:41:10.368116    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2: {Name:mk6d23a3062e04b7b3cd2f8f1bee1444b4d77482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:41:10.368418    7492 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2 ...
	I0127 19:41:10.368427    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2: {Name:mk64d8aacf9651324704a8002ebfd1ac8712a26c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:41:10.368626    7492 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt
	I0127 19:41:10.368804    7492 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key
	I0127 19:41:10.368982    7492 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key
	I0127 19:41:10.369002    7492 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt with IP's: []
	I0127 19:41:10.716056    7492 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt ...
	I0127 19:41:10.716070    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt: {Name:mkc6726d4fc8887e9eb49f726a06b0037ac71b2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:41:10.716345    7492 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key ...
	I0127 19:41:10.716352    7492 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key: {Name:mk2e0342ced452cfe62fa48c2ac5e81968858620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:41:10.716530    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 19:41:10.716560    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 19:41:10.716581    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 19:41:10.716604    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 19:41:10.716623    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 19:41:10.716645    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0127 19:41:10.716668    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 19:41:10.716688    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 19:41:10.716776    7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem (1338 bytes)
	W0127 19:41:10.716834    7492 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406_empty.pem, impossibly tiny 0 bytes
	I0127 19:41:10.716846    7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 19:41:10.716878    7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem (1078 bytes)
	I0127 19:41:10.716919    7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem (1123 bytes)
	I0127 19:41:10.716954    7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem (1679 bytes)
	I0127 19:41:10.717031    7492 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem (1708 bytes)
	I0127 19:41:10.717060    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 19:41:10.717117    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem -> /usr/share/ca-certificates/4406.pem
	I0127 19:41:10.717143    7492 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> /usr/share/ca-certificates/44062.pem
	I0127 19:41:10.717675    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 19:41:10.737043    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 19:41:10.754701    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 19:41:10.772213    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/ingress-addon-legacy-054000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 19:41:10.789744    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 19:41:10.807103    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 19:41:10.824981    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 19:41:10.842568    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 19:41:10.860393    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 19:41:10.878358    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem --> /usr/share/ca-certificates/4406.pem (1338 bytes)
	I0127 19:41:10.896157    7492 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /usr/share/ca-certificates/44062.pem (1708 bytes)
	I0127 19:41:10.913683    7492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 19:41:10.926724    7492 ssh_runner.go:195] Run: openssl version
	I0127 19:41:10.932400    7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 19:41:10.941167    7492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 19:41:10.945530    7492 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:31 /usr/share/ca-certificates/minikubeCA.pem
	I0127 19:41:10.945574    7492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 19:41:10.951028    7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 19:41:10.959467    7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4406.pem && ln -fs /usr/share/ca-certificates/4406.pem /etc/ssl/certs/4406.pem"
	I0127 19:41:10.967890    7492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4406.pem
	I0127 19:41:10.971906    7492 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:36 /usr/share/ca-certificates/4406.pem
	I0127 19:41:10.971956    7492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4406.pem
	I0127 19:41:10.977362    7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4406.pem /etc/ssl/certs/51391683.0"
	I0127 19:41:10.985577    7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44062.pem && ln -fs /usr/share/ca-certificates/44062.pem /etc/ssl/certs/44062.pem"
	I0127 19:41:10.993954    7492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44062.pem
	I0127 19:41:10.998188    7492 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:36 /usr/share/ca-certificates/44062.pem
	I0127 19:41:10.998238    7492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44062.pem
	I0127 19:41:11.003874    7492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44062.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 19:41:11.012168    7492 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-054000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-054000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:41:11.012326    7492 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 19:41:11.035833    7492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 19:41:11.044067    7492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 19:41:11.051807    7492 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 19:41:11.051881    7492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 19:41:11.059359    7492 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 19:41:11.059384    7492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 19:41:11.108390    7492 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0127 19:41:11.108431    7492 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 19:41:11.412018    7492 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 19:41:11.412147    7492 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 19:41:11.412281    7492 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 19:41:11.638183    7492 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 19:41:11.638725    7492 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 19:41:11.638784    7492 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0127 19:41:11.712792    7492 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 19:41:11.756024    7492 out.go:204]   - Generating certificates and keys ...
	I0127 19:41:11.756169    7492 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 19:41:11.756268    7492 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 19:41:12.068637    7492 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 19:41:12.446081    7492 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0127 19:41:12.752873    7492 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0127 19:41:12.821444    7492 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0127 19:41:12.936662    7492 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0127 19:41:12.936798    7492 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0127 19:41:13.037964    7492 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0127 19:41:13.038152    7492 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0127 19:41:13.206406    7492 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 19:41:13.406723    7492 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 19:41:13.689580    7492 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0127 19:41:13.705664    7492 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 19:41:13.801781    7492 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 19:41:13.914710    7492 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 19:41:14.048146    7492 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 19:41:14.118969    7492 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 19:41:14.119519    7492 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 19:41:14.141118    7492 out.go:204]   - Booting up control plane ...
	I0127 19:41:14.141269    7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 19:41:14.141368    7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 19:41:14.141474    7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 19:41:14.141564    7492 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 19:41:14.141748    7492 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 19:41:54.128042    7492 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 19:41:54.129019    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:41:54.129198    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:41:59.129939    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:41:59.130152    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:42:09.131888    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:42:09.132052    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:42:29.133507    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:42:29.133762    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:43:09.134531    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:43:09.134731    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:43:09.134743    7492 kubeadm.go:322] 
	I0127 19:43:09.134780    7492 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0127 19:43:09.134858    7492 kubeadm.go:322] 		timed out waiting for the condition
	I0127 19:43:09.134869    7492 kubeadm.go:322] 
	I0127 19:43:09.134904    7492 kubeadm.go:322] 	This error is likely caused by:
	I0127 19:43:09.134933    7492 kubeadm.go:322] 		- The kubelet is not running
	I0127 19:43:09.135070    7492 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 19:43:09.135085    7492 kubeadm.go:322] 
	I0127 19:43:09.135181    7492 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 19:43:09.135241    7492 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0127 19:43:09.135277    7492 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0127 19:43:09.135284    7492 kubeadm.go:322] 
	I0127 19:43:09.135372    7492 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 19:43:09.135465    7492 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 19:43:09.135478    7492 kubeadm.go:322] 
	I0127 19:43:09.135561    7492 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0127 19:43:09.135639    7492 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0127 19:43:09.135732    7492 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0127 19:43:09.135819    7492 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0127 19:43:09.135832    7492 kubeadm.go:322] 
	I0127 19:43:09.138817    7492 kubeadm.go:322] W0128 03:41:11.107740    1169 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0127 19:43:09.138968    7492 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 19:43:09.139043    7492 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 19:43:09.139161    7492 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
	I0127 19:43:09.139267    7492 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 19:43:09.139370    7492 kubeadm.go:322] W0128 03:41:14.123584    1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0127 19:43:09.139471    7492 kubeadm.go:322] W0128 03:41:14.125037    1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0127 19:43:09.139542    7492 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 19:43:09.139601    7492 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0127 19:43:09.139811    7492 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 03:41:11.107740    1169 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 03:41:14.123584    1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 03:41:14.125037    1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-054000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 03:41:11.107740    1169 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 03:41:14.123584    1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 03:41:14.125037    1169 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 19:43:09.139866    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0127 19:43:09.554550    7492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 19:43:09.564622    7492 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 19:43:09.564680    7492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 19:43:09.572049    7492 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 19:43:09.572080    7492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 19:43:09.620065    7492 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0127 19:43:09.620120    7492 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 19:43:09.910530    7492 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 19:43:09.910627    7492 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 19:43:09.910721    7492 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 19:43:10.132063    7492 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 19:43:10.132992    7492 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 19:43:10.133041    7492 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0127 19:43:10.201986    7492 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 19:43:10.223677    7492 out.go:204]   - Generating certificates and keys ...
	I0127 19:43:10.223761    7492 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 19:43:10.223830    7492 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 19:43:10.223939    7492 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 19:43:10.224032    7492 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0127 19:43:10.224087    7492 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 19:43:10.224140    7492 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0127 19:43:10.224254    7492 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0127 19:43:10.224316    7492 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0127 19:43:10.224371    7492 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 19:43:10.224435    7492 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 19:43:10.224485    7492 kubeadm.go:322] [certs] Using the existing "sa" key
	I0127 19:43:10.224538    7492 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 19:43:10.411376    7492 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 19:43:10.576138    7492 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 19:43:10.719313    7492 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 19:43:10.897559    7492 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 19:43:10.898227    7492 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 19:43:10.919749    7492 out.go:204]   - Booting up control plane ...
	I0127 19:43:10.919971    7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 19:43:10.920172    7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 19:43:10.920296    7492 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 19:43:10.920458    7492 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 19:43:10.920747    7492 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 19:43:50.907841    7492 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 19:43:50.908421    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:43:50.908667    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:43:55.909384    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:43:55.909602    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:44:05.910280    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:44:05.910438    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:44:25.911844    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:44:25.912056    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:45:05.913015    7492 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 19:45:05.913240    7492 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 19:45:05.913253    7492 kubeadm.go:322] 
	I0127 19:45:05.913291    7492 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0127 19:45:05.913334    7492 kubeadm.go:322] 		timed out waiting for the condition
	I0127 19:45:05.913343    7492 kubeadm.go:322] 
	I0127 19:45:05.913412    7492 kubeadm.go:322] 	This error is likely caused by:
	I0127 19:45:05.913457    7492 kubeadm.go:322] 		- The kubelet is not running
	I0127 19:45:05.913565    7492 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 19:45:05.913585    7492 kubeadm.go:322] 
	I0127 19:45:05.913725    7492 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 19:45:05.913770    7492 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0127 19:45:05.913820    7492 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0127 19:45:05.913830    7492 kubeadm.go:322] 
	I0127 19:45:05.913964    7492 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 19:45:05.914073    7492 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0127 19:45:05.914089    7492 kubeadm.go:322] 
	I0127 19:45:05.914199    7492 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0127 19:45:05.914311    7492 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0127 19:45:05.914375    7492 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0127 19:45:05.914411    7492 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0127 19:45:05.914419    7492 kubeadm.go:322] 
	I0127 19:45:05.917325    7492 kubeadm.go:322] W0128 03:43:09.619357    3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0127 19:45:05.917474    7492 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 19:45:05.917530    7492 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 19:45:05.917639    7492 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
	I0127 19:45:05.917733    7492 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 19:45:05.917825    7492 kubeadm.go:322] W0128 03:43:10.902448    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0127 19:45:05.917922    7492 kubeadm.go:322] W0128 03:43:10.904092    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0127 19:45:05.917989    7492 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 19:45:05.918053    7492 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0127 19:45:05.918089    7492 kubeadm.go:403] StartCluster complete in 3m54.907924679s
	I0127 19:45:05.918183    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 19:45:05.941054    7492 logs.go:279] 0 containers: []
	W0127 19:45:05.941068    7492 logs.go:281] No container was found matching "kube-apiserver"
	I0127 19:45:05.941137    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 19:45:05.964528    7492 logs.go:279] 0 containers: []
	W0127 19:45:05.964542    7492 logs.go:281] No container was found matching "etcd"
	I0127 19:45:05.964612    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 19:45:05.987525    7492 logs.go:279] 0 containers: []
	W0127 19:45:05.987540    7492 logs.go:281] No container was found matching "coredns"
	I0127 19:45:05.987612    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 19:45:06.009743    7492 logs.go:279] 0 containers: []
	W0127 19:45:06.009756    7492 logs.go:281] No container was found matching "kube-scheduler"
	I0127 19:45:06.009833    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 19:45:06.031949    7492 logs.go:279] 0 containers: []
	W0127 19:45:06.031963    7492 logs.go:281] No container was found matching "kube-proxy"
	I0127 19:45:06.032033    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 19:45:06.054314    7492 logs.go:279] 0 containers: []
	W0127 19:45:06.054327    7492 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 19:45:06.054402    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 19:45:06.077463    7492 logs.go:279] 0 containers: []
	W0127 19:45:06.077478    7492 logs.go:281] No container was found matching "storage-provisioner"
	I0127 19:45:06.077545    7492 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 19:45:06.099288    7492 logs.go:279] 0 containers: []
	W0127 19:45:06.099304    7492 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 19:45:06.099317    7492 logs.go:124] Gathering logs for kubelet ...
	I0127 19:45:06.099328    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 19:45:06.137292    7492 logs.go:124] Gathering logs for dmesg ...
	I0127 19:45:06.137306    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 19:45:06.151028    7492 logs.go:124] Gathering logs for describe nodes ...
	I0127 19:45:06.151045    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 19:45:06.208343    7492 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 19:45:06.208354    7492 logs.go:124] Gathering logs for Docker ...
	I0127 19:45:06.208361    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 19:45:06.225388    7492 logs.go:124] Gathering logs for container status ...
	I0127 19:45:06.225400    7492 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 19:45:08.274626    7492 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049230709s)
	W0127 19:45:08.274749    7492 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 03:43:09.619357    3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 03:43:10.902448    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 03:43:10.904092    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 19:45:08.274768    7492 out.go:239] * 
	* 
	W0127 19:45:08.274899    7492 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 03:43:09.619357    3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 03:43:10.902448    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 03:43:10.904092    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 03:43:09.619357    3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 03:43:10.902448    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 03:43:10.904092    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 19:45:08.274913    7492 out.go:239] * 
	* 
	W0127 19:45:08.275622    7492 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 19:45:08.338161    7492 out.go:177] 
	W0127 19:45:08.380493    7492 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 03:43:09.619357    3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 03:43:10.902448    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 03:43:10.904092    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 03:43:09.619357    3690 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 03:43:10.902448    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 03:43:10.904092    3690 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 19:45:08.380663    7492 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 19:45:08.380754    7492 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 19:45:08.402086    7492 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.87s)
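Triage sketch (not part of the captured log): the kubeadm wait-control-plane phase above times out because the kubelet never answers its health check on 127.0.0.1:10248, and the preflight warnings plus the Suggestion line at the end of the log point at the Docker "cgroupfs" vs. recommended "systemd" cgroup-driver mismatch as one likely cause. A minimal local diagnosis/retry sequence, assuming the same profile name and the workaround minikube itself suggests, could look like this (all commands are taken from, or mirror, ones that appear in the log; the exact ssh quoting is an assumption):

    # inspect the kubelet inside the node (same journalctl call the log runs)
    minikube ssh -p ingress-addon-legacy-054000 -- sudo journalctl -u kubelet -n 100
    # list any Kubernetes containers the runtime did start (from the kubeadm hint above)
    minikube ssh -p ingress-addon-legacy-054000 -- "docker ps -a | grep kube | grep -v pause"
    # retry the start with the cgroup-driver override suggested at the end of the log
    minikube delete -p ingress-addon-legacy-054000
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-054000 --kubernetes-version=v1.18.20 --memory=4096 --driver=docker --extra-config=kubelet.cgroup-driver=systemd
    # collect full logs for a GitHub issue, as the failure box requests
    out/minikube-darwin-amd64 -p ingress-addon-legacy-054000 logs --file=logs.txt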

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-054000 addons enable ingress --alsologtostderr -v=5
E0127 19:46:28.506854    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-054000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.129305073s)
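Note (not part of the captured output): every kubectl apply retry in the stderr below fails with "connection refused" on localhost:8443, and the final MK_ADDON_ENABLE error reports that the "ingress-addon-legacy-054000" context does not exist. This suggests a downstream effect of the cluster-start failure above rather than an addon-specific problem. A quick sanity check before enabling addons, assuming the same profile name, would be:

    out/minikube-darwin-amd64 -p ingress-addon-legacy-054000 status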

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 19:45:08.551796    7877 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:45:08.552041    7877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:45:08.552047    7877 out.go:309] Setting ErrFile to fd 2...
	I0127 19:45:08.552051    7877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:45:08.552160    7877 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 19:45:08.573991    7877 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0127 19:45:08.595023    7877 config.go:180] Loaded profile config "ingress-addon-legacy-054000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0127 19:45:08.595043    7877 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-054000"
	I0127 19:45:08.595050    7877 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-054000"
	I0127 19:45:08.595343    7877 host.go:66] Checking if "ingress-addon-legacy-054000" exists ...
	I0127 19:45:08.595870    7877 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
	I0127 19:45:08.674746    7877 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0127 19:45:08.696314    7877 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0127 19:45:08.717760    7877 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0127 19:45:08.738735    7877 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0127 19:45:08.759865    7877 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0127 19:45:08.759885    7877 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0127 19:45:08.760004    7877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:45:08.816718    7877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
	I0127 19:45:08.916374    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:08.967125    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:08.967147    7877 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:09.244374    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:09.296158    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:09.296177    7877 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:09.837721    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:09.891447    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:09.891473    7877 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:10.547941    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:10.601593    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:10.601608    7877 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:11.393009    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:11.445384    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:11.445404    7877 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:12.615816    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:12.668413    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:12.668429    7877 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:14.923101    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:14.975842    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:14.975859    7877 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:16.587713    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:16.642590    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:16.642605    7877 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:19.447124    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:19.499407    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:19.499427    7877 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:23.326590    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:23.381360    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:23.381375    7877 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:31.080134    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:31.132777    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:31.132793    7877 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:45.768703    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:45:45.822782    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:45:45.822800    7877 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:14.231468    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:46:14.284955    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:14.284971    7877 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:37.453950    7877 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0127 19:46:37.507498    7877 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:37.507526    7877 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-054000"
	I0127 19:46:37.529105    7877 out.go:177] * Verifying ingress addon...
	I0127 19:46:37.551364    7877 out.go:177] 
	W0127 19:46:37.574343    7877 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-054000" does not exist: client config: context "ingress-addon-legacy-054000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-054000" does not exist: client config: context "ingress-addon-legacy-054000" does not exist]
	W0127 19:46:37.574381    7877 out.go:239] * 
	* 
	W0127 19:46:37.578224    7877 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 19:46:37.598968    7877 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-054000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-054000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac",
	        "Created": "2023-01-28T03:41:05.04450317Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49510,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T03:41:05.337261558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/hosts",
	        "LogPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac-json.log",
	        "Name": "/ingress-addon-legacy-054000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-054000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-054000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-054000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-054000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-054000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-054000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-054000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0814760e479a1d33cd8c88c143af6b8b298c2196a9be8c1b42828a2e2a1cbda2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50680"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50681"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50682"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50683"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50684"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0814760e479a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-054000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "04bd820cff09",
	                        "ingress-addon-legacy-054000"
	                    ],
	                    "NetworkID": "76f74674bc9686104e96c1ca3bce3bc264b433426883d59dd86930684d6b32d7",
	                    "EndpointID": "deb45cf4af11160b6589392a5e31563453df6a540814d158455beb9f0b2d9e48",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-054000 -n ingress-addon-legacy-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-054000 -n ingress-addon-legacy-054000: exit status 6 (411.357278ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 19:46:38.083944    7979 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-054000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-054000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.60s)
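The status check above points at the source of the post-mortem noise: the profile's context is missing from the kubeconfig, so the harness falls back to a stale minikube-vm entry. A minimal sketch of how one might repair and re-check the context locally, assuming the same profile name and binary path as in the run above (this only fixes the local kubeconfig view, not the refused connections inside the node):

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-054000
	kubectl config get-contexts
	out/minikube-darwin-amd64 status -p ingress-addon-legacy-054000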

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-054000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-054000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.069158636s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 19:46:38.151473    7989 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:46:38.151728    7989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:46:38.151733    7989 out.go:309] Setting ErrFile to fd 2...
	I0127 19:46:38.151738    7989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:46:38.151851    7989 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 19:46:38.173903    7989 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0127 19:46:38.196128    7989 config.go:180] Loaded profile config "ingress-addon-legacy-054000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0127 19:46:38.196165    7989 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-054000"
	I0127 19:46:38.196176    7989 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-054000"
	I0127 19:46:38.196724    7989 host.go:66] Checking if "ingress-addon-legacy-054000" exists ...
	I0127 19:46:38.197672    7989 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-054000 --format={{.State.Status}}
	I0127 19:46:38.276093    7989 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0127 19:46:38.298066    7989 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0127 19:46:38.319801    7989 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0127 19:46:38.319833    7989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0127 19:46:38.319988    7989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-054000
	I0127 19:46:38.377105    7989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50680 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/ingress-addon-legacy-054000/id_rsa Username:docker}
	I0127 19:46:38.474581    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:38.524933    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:38.524957    7989 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:38.803405    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:38.858095    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:38.858115    7989 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:39.400566    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:39.455335    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:39.455355    7989 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:40.112350    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:40.168046    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:40.168061    7989 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:40.960204    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:41.013813    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:41.013828    7989 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:42.184514    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:42.236953    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:42.236970    7989 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:44.491946    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:44.546990    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:44.547007    7989 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:46.157863    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:46.210679    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:46.210695    7989 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:49.015639    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:49.068985    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:49.068999    7989 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:52.894203    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:46:52.948231    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:46:52.948254    7989 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:47:00.646545    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:47:00.700088    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:47:00.700104    7989 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:47:15.336438    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:47:15.390457    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:47:15.390471    7989 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:47:43.797321    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:47:43.848927    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:47:43.848949    7989 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:48:07.018019    7989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0127 19:48:07.071439    7989 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0127 19:48:07.093253    7989 out.go:177] 
	W0127 19:48:07.115257    7989 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0127 19:48:07.115286    7989 out.go:239] * 
	* 
	W0127 19:48:07.119014    7989 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 19:48:07.140285    7989 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-054000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-054000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac",
	        "Created": "2023-01-28T03:41:05.04450317Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49510,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T03:41:05.337261558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/hosts",
	        "LogPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac-json.log",
	        "Name": "/ingress-addon-legacy-054000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-054000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-054000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-054000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-054000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-054000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-054000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-054000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0814760e479a1d33cd8c88c143af6b8b298c2196a9be8c1b42828a2e2a1cbda2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50680"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50681"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50682"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50683"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50684"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0814760e479a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-054000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "04bd820cff09",
	                        "ingress-addon-legacy-054000"
	                    ],
	                    "NetworkID": "76f74674bc9686104e96c1ca3bce3bc264b433426883d59dd86930684d6b32d7",
	                    "EndpointID": "deb45cf4af11160b6589392a5e31563453df6a540814d158455beb9f0b2d9e48",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-054000 -n ingress-addon-legacy-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-054000 -n ingress-addon-legacy-054000: exit status 6 (401.223531ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 19:48:07.614896    8081 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-054000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-054000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)
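The repeated "kubeconfig endpoint: extract IP: ... does not appear in .../kubeconfig" errors above mean the profile's context is missing from the kubeconfig at that path, so `minikube status` cannot resolve an API-server endpoint for it. A minimal sketch of the same lookup, assuming k8s.io/client-go is available (an illustration only, not code from the test suite):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path and profile name are copied from the status error above.
		path := "/Users/jenkins/minikube-integration/15565-3092/kubeconfig"
		profile := "ingress-addon-legacy-054000"

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "load kubeconfig: %v\n", err)
			os.Exit(1)
		}
		ctx, ok := cfg.Contexts[profile]
		if !ok {
			fmt.Printf("%q does not appear in %s\n", profile, path)
			return
		}
		cluster, ok := cfg.Clusters[ctx.Cluster]
		if !ok {
			fmt.Printf("context %q references missing cluster %q\n", profile, ctx.Cluster)
			return
		}
		fmt.Printf("context %q -> server %s\n", profile, cluster.Server)
	}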

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.47s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-054000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-054000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac",
	        "Created": "2023-01-28T03:41:05.04450317Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 49510,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T03:41:05.337261558Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/hosts",
	        "LogPath": "/var/lib/docker/containers/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac/04bd820cff09048775de28f7a3322b6419d1ef9e3a9dba0f3fea75efc86e80ac-json.log",
	        "Name": "/ingress-addon-legacy-054000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-054000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-054000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2a7ca75cef008e9cf36558cf64fcff5fce342b8598919c802e268be29842ecd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-054000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-054000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-054000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-054000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-054000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0814760e479a1d33cd8c88c143af6b8b298c2196a9be8c1b42828a2e2a1cbda2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50680"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50681"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50682"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50683"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50684"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0814760e479a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-054000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "04bd820cff09",
	                        "ingress-addon-legacy-054000"
	                    ],
	                    "NetworkID": "76f74674bc9686104e96c1ca3bce3bc264b433426883d59dd86930684d6b32d7",
	                    "EndpointID": "deb45cf4af11160b6589392a5e31563453df6a540814d158455beb9f0b2d9e48",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-054000 -n ingress-addon-legacy-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-054000 -n ingress-addon-legacy-054000: exit status 6 (405.770293ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 19:48:08.080414    8093 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-054000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-054000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.47s)
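The post-mortem docker inspect above is mostly useful for the port bindings: the kicbase container publishes 22, 2376, 5000, 8443 and 32443 on 127.0.0.1, and 8443/tcp (the Kubernetes API server) is mapped to host port 50684. A minimal sketch, assuming the docker CLI is on PATH and the container still exists, of pulling that mapping out of the same inspect output (an illustration only, not part of helpers_test.go):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// binding mirrors one entry of the "Ports" map in `docker inspect` output.
	type binding struct {
		HostIp   string
		HostPort string
	}

	type container struct {
		NetworkSettings struct {
			Ports map[string][]binding
		}
	}

	func main() {
		name := "ingress-addon-legacy-054000" // container name from the inspect dump above
		out, err := exec.Command("docker", "inspect", name).Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		var containers []container
		if err := json.Unmarshal(out, &containers); err != nil {
			log.Fatalf("decode inspect output: %v", err)
		}
		if len(containers) == 0 {
			log.Fatal("no such container")
		}
		// 8443/tcp is the API server; the dump above maps it to 127.0.0.1:50684.
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("8443/tcp -> %s:%s\n", b.HostIp, b.HostPort)
		}
	}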

                                                
                                    
TestRunningBinaryUpgrade (65.38s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3252844424.exe start -p running-upgrade-498000 --memory=2200 --vm-driver=docker 
E0127 20:08:25.311215    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3252844424.exe start -p running-upgrade-498000 --memory=2200 --vm-driver=docker : exit status 70 (50.877810176s)

                                                
                                                
-- stdout --
	* [running-upgrade-498000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig4179763877
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:08:07.365293706 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-498000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:08:26.988115107 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-498000", then "minikube start -p running-upgrade-498000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 40.02 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 92.03 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 148.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 208.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 255.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 299.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 348.98 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 407.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 460.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 503.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:08:26.988115107 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3252844424.exe start -p running-upgrade-498000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3252844424.exe start -p running-upgrade-498000 --memory=2200 --vm-driver=docker : exit status 70 (4.410376819s)

                                                
                                                
-- stdout --
	* [running-upgrade-498000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig3016594725
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-498000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3252844424.exe start -p running-upgrade-498000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3252844424.exe start -p running-upgrade-498000 --memory=2200 --vm-driver=docker : exit status 70 (4.676046339s)

                                                
                                                
-- stdout --
	* [running-upgrade-498000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig424170314
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-498000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-01-27 20:08:40.399206 -0800 PST m=+2309.403559055
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-498000
helpers_test.go:235: (dbg) docker inspect running-upgrade-498000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92ad603dc5dd255abd9800d6d2c650e1abaddb3efae6a23ba31254447c590a6e",
	        "Created": "2023-01-28T04:08:15.571224416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 174340,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:08:15.836852414Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/92ad603dc5dd255abd9800d6d2c650e1abaddb3efae6a23ba31254447c590a6e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92ad603dc5dd255abd9800d6d2c650e1abaddb3efae6a23ba31254447c590a6e/hostname",
	        "HostsPath": "/var/lib/docker/containers/92ad603dc5dd255abd9800d6d2c650e1abaddb3efae6a23ba31254447c590a6e/hosts",
	        "LogPath": "/var/lib/docker/containers/92ad603dc5dd255abd9800d6d2c650e1abaddb3efae6a23ba31254447c590a6e/92ad603dc5dd255abd9800d6d2c650e1abaddb3efae6a23ba31254447c590a6e-json.log",
	        "Name": "/running-upgrade-498000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-498000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4fa230cbaf7431cf76ef22708e29b072878017644990c78b6e6fef89cd509197-init/diff:/var/lib/docker/overlay2/575040c1cd6fa9b064a258db2eb02fcbe8cdb3384cdb47f19b4234e5ba9b4a97/diff:/var/lib/docker/overlay2/1e8fb5f1d948945df3ddce71e758d3da8d8118858f2e5a08df9464ee9ebb2037/diff:/var/lib/docker/overlay2/c544960136dbc5d092fae5313d527643651c6e2a65704463efaa4358ccae1331/diff:/var/lib/docker/overlay2/43f564c70597689a3ea632103df9b1a253ffd27aa3437620374f1177a296a1eb/diff:/var/lib/docker/overlay2/d16fde0b6bdeaf4261faee7fc9e42341173eb434cc833954ddb2277f468c37f0/diff:/var/lib/docker/overlay2/8285e91d760eaef85d2fb3c28000c3f0709f50513ebe89cf374288f97135c044/diff:/var/lib/docker/overlay2/1e968842ba0ce46f4ff6359b3e5a21c70757c393eaad21d62f2266ca03ecf309/diff:/var/lib/docker/overlay2/dc8d9c03061beef86986bd597b0fd68f381f214529929dbef2fa75e7ae981eab/diff:/var/lib/docker/overlay2/75498eeada407023a5fd32c0335558b546de1882e522b699aad1f475cc23d360/diff:/var/lib/docker/overlay2/30fe2e
0418914eba58711b96964efe6c7b51f633464f31a15cc86cb6d66dc918/diff:/var/lib/docker/overlay2/41574202d4243f42c771c64dec875284f984561185dd87461ded79e989fe0012/diff:/var/lib/docker/overlay2/2486a32b89da283f9ae514f00dfa4f50bb6300e2f959c3637d982fdf023db0e4/diff:/var/lib/docker/overlay2/c573ab199116f10bd11a3f57b93275ba9b230f9c5f1ce297dbbf8a9644a2784d/diff:/var/lib/docker/overlay2/c3d71f26de8fc41a26f47958ab3b388a7367f8d0e96e143836e58029c9b3afae/diff:/var/lib/docker/overlay2/8462333bc4a29ccf2ca4426977034439f352217402c29f15fecad093927e849c/diff:/var/lib/docker/overlay2/922a17c47d339ea250e98f5fcf695096b4a16e48818603d8905123bd77cedb56/diff:/var/lib/docker/overlay2/dfacd1805d008155c4ad90ccfc042aa2ec49c7407b078f228b157fbcb3a0469c/diff:/var/lib/docker/overlay2/bc33364f21f93e8d8589c294e5b7e688a319087be4d62cdfa8f6c73ea9101544/diff:/var/lib/docker/overlay2/633cfb70aa09484c4007a73f11539673a8cbd06a79b085d7d6a728e6d393aa2b/diff:/var/lib/docker/overlay2/03134537a7343ec3b51c98c4ea881891568edb58af0f5710b2fa8786f4840bc2/diff:/var/lib/d
ocker/overlay2/76c005bad483f262fdc488a083cf470dfcbc09f18c10bd5d71b64207a9e8bb13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4fa230cbaf7431cf76ef22708e29b072878017644990c78b6e6fef89cd509197/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4fa230cbaf7431cf76ef22708e29b072878017644990c78b6e6fef89cd509197/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4fa230cbaf7431cf76ef22708e29b072878017644990c78b6e6fef89cd509197/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-498000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-498000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-498000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-498000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-498000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3584f26a641b07090c554aa7f9d709433b773f55868b376603dfe2ecf7ff6c86",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52803"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52804"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3584f26a641b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "d1cf9c98634c0f953a3835f8cc56e51ffe3298d96b3278368186672b8394fb88",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "db2808adae70f1543c0b2142988ae45ef7eeb96c9849cf9eae7df9ab6bb57a0e",
	                    "EndpointID": "d1cf9c98634c0f953a3835f8cc56e51ffe3298d96b3278368186672b8394fb88",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-498000 -n running-upgrade-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-498000 -n running-upgrade-498000: exit status 6 (390.923192ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:08:40.837957   14871 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-498000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-498000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-498000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-498000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-498000: (2.342943359s)
--- FAIL: TestRunningBinaryUpgrade (65.38s)
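The generated docker.service shown in the diffs above depends on a systemd rule that the in-file comments call out: for a service that is not Type=oneshot, only one ExecStart= command may remain in effect, and an empty `ExecStart=` line clears whatever was inherited before the drop-in adds its own dockerd command. A minimal sketch of that counting rule (a hypothetical helper, not minikube code; the unit path is taken from the diff above):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/lib/systemd/system/docker.service")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		effective := 0
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if !strings.HasPrefix(line, "ExecStart=") {
				continue
			}
			if line == "ExecStart=" {
				// An empty assignment resets the list, which is exactly what
				// the provisioned unit does before setting its own command.
				effective = 0
				continue
			}
			effective++
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("effective ExecStart commands: %d\n", effective)
		if effective > 1 {
			fmt.Println("systemd would reject this unit: more than one ExecStart= for a non-oneshot service")
		}
	}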

                                                
                                    
TestKubernetesUpgrade (564.57s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0127 20:09:39.547190    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:39.552305    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:39.562402    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:39.582685    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:39.622811    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:39.702938    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:39.863037    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:40.183387    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:40.823591    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:42.103850    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:44.664062    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:09:49.784225    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:10:00.024353    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:10:20.506300    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m12.183050308s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-851000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-851000 in cluster kubernetes-upgrade-851000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 20:09:36.831152   15248 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:09:36.831309   15248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:09:36.831314   15248 out.go:309] Setting ErrFile to fd 2...
	I0127 20:09:36.831318   15248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:09:36.831432   15248 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 20:09:36.831935   15248 out.go:303] Setting JSON to false
	I0127 20:09:36.850555   15248 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4150,"bootTime":1674874826,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 20:09:36.850639   15248 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:09:36.871862   15248 out.go:177] * [kubernetes-upgrade-851000] minikube v1.28.0 on Darwin 13.2
	I0127 20:09:36.914495   15248 notify.go:220] Checking for updates...
	I0127 20:09:36.935635   15248 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:09:36.956304   15248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:09:36.977660   15248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:09:36.998686   15248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:09:37.020486   15248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 20:09:37.041675   15248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:09:37.063489   15248 config.go:180] Loaded profile config "cert-expiration-664000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:09:37.063603   15248 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:09:37.124643   15248 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 20:09:37.124779   15248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:09:37.267982   15248 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 04:09:37.17470271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:09:37.310745   15248 out.go:177] * Using the docker driver based on user configuration
	I0127 20:09:37.331712   15248 start.go:296] selected driver: docker
	I0127 20:09:37.331740   15248 start.go:840] validating driver "docker" against <nil>
	I0127 20:09:37.331758   15248 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:09:37.335769   15248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:09:37.477965   15248 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 04:09:37.385950971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:09:37.478085   15248 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 20:09:37.478231   15248 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 20:09:37.499873   15248 out.go:177] * Using Docker Desktop driver with root privileges
	I0127 20:09:37.521872   15248 cni.go:84] Creating CNI manager for ""
	I0127 20:09:37.521910   15248 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:09:37.521927   15248 start_flags.go:319] config:
	{Name:kubernetes-upgrade-851000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-851000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:09:37.565609   15248 out.go:177] * Starting control plane node kubernetes-upgrade-851000 in cluster kubernetes-upgrade-851000
	I0127 20:09:37.586890   15248 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 20:09:37.608932   15248 out.go:177] * Pulling base image ...
	I0127 20:09:37.650836   15248 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 20:09:37.650838   15248 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:09:37.650955   15248 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 20:09:37.650979   15248 cache.go:57] Caching tarball of preloaded images
	I0127 20:09:37.651730   15248 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:09:37.651903   15248 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0127 20:09:37.652222   15248 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/config.json ...
	I0127 20:09:37.652278   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/config.json: {Name:mk9c83e67a9b3d824f7c260d2dbf18e930cc79f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:37.707444   15248 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0127 20:09:37.707458   15248 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0127 20:09:37.707473   15248 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:09:37.707508   15248 start.go:364] acquiring machines lock for kubernetes-upgrade-851000: {Name:mk294cdd3f3f8234709f3b17745014faff50de9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:09:37.707674   15248 start.go:368] acquired machines lock for "kubernetes-upgrade-851000" in 154.144µs
	I0127 20:09:37.707699   15248 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-851000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-851000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:09:37.707795   15248 start.go:125] createHost starting for "" (driver="docker")
	I0127 20:09:37.729807   15248 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0127 20:09:37.730183   15248 start.go:159] libmachine.API.Create for "kubernetes-upgrade-851000" (driver="docker")
	I0127 20:09:37.730240   15248 client.go:168] LocalClient.Create starting
	I0127 20:09:37.730397   15248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem
	I0127 20:09:37.730475   15248 main.go:141] libmachine: Decoding PEM data...
	I0127 20:09:37.730509   15248 main.go:141] libmachine: Parsing certificate...
	I0127 20:09:37.730604   15248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem
	I0127 20:09:37.730672   15248 main.go:141] libmachine: Decoding PEM data...
	I0127 20:09:37.730703   15248 main.go:141] libmachine: Parsing certificate...
	I0127 20:09:37.731572   15248 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-851000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 20:09:37.786510   15248 cli_runner.go:211] docker network inspect kubernetes-upgrade-851000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 20:09:37.786612   15248 network_create.go:281] running [docker network inspect kubernetes-upgrade-851000] to gather additional debugging logs...
	I0127 20:09:37.786629   15248 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-851000
	W0127 20:09:37.841762   15248 cli_runner.go:211] docker network inspect kubernetes-upgrade-851000 returned with exit code 1
	I0127 20:09:37.841793   15248 network_create.go:284] error running [docker network inspect kubernetes-upgrade-851000]: docker network inspect kubernetes-upgrade-851000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-851000
	I0127 20:09:37.841805   15248 network_create.go:286] output of [docker network inspect kubernetes-upgrade-851000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-851000
	
	** /stderr **
	I0127 20:09:37.841899   15248 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 20:09:37.898728   15248 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:09:37.899082   15248 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012c0900}
	I0127 20:09:37.899093   15248 network_create.go:123] attempt to create docker network kubernetes-upgrade-851000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0127 20:09:37.899162   15248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 kubernetes-upgrade-851000
	W0127 20:09:37.954891   15248 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 kubernetes-upgrade-851000 returned with exit code 1
	W0127 20:09:37.954919   15248 network_create.go:148] failed to create docker network kubernetes-upgrade-851000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 kubernetes-upgrade-851000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0127 20:09:37.954931   15248 network_create.go:115] failed to create docker network kubernetes-upgrade-851000 192.168.58.0/24, will retry: subnet is taken
	I0127 20:09:37.956474   15248 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:09:37.956806   15248 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001226b00}
	I0127 20:09:37.956817   15248 network_create.go:123] attempt to create docker network kubernetes-upgrade-851000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0127 20:09:37.956896   15248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 kubernetes-upgrade-851000
	W0127 20:09:38.011063   15248 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 kubernetes-upgrade-851000 returned with exit code 1
	W0127 20:09:38.011095   15248 network_create.go:148] failed to create docker network kubernetes-upgrade-851000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 kubernetes-upgrade-851000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0127 20:09:38.011113   15248 network_create.go:115] failed to create docker network kubernetes-upgrade-851000 192.168.67.0/24, will retry: subnet is taken
	I0127 20:09:38.012471   15248 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:09:38.012780   15248 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012e23f0}
	I0127 20:09:38.012792   15248 network_create.go:123] attempt to create docker network kubernetes-upgrade-851000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 20:09:38.012865   15248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 kubernetes-upgrade-851000
	I0127 20:09:38.105061   15248 network_create.go:107] docker network kubernetes-upgrade-851000 192.168.76.0/24 created
	I0127 20:09:38.105091   15248 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-851000" container
	I0127 20:09:38.105210   15248 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 20:09:38.162677   15248 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-851000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 --label created_by.minikube.sigs.k8s.io=true
	I0127 20:09:38.217998   15248 oci.go:103] Successfully created a docker volume kubernetes-upgrade-851000
	I0127 20:09:38.218104   15248 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-851000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 --entrypoint /usr/bin/test -v kubernetes-upgrade-851000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0127 20:09:38.779087   15248 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-851000
	I0127 20:09:38.779129   15248 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:09:38.779144   15248 kic.go:190] Starting extracting preloaded images to volume ...
	I0127 20:09:38.779274   15248 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-851000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 20:09:44.759531   15248 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-851000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (5.980095827s)
	I0127 20:09:44.759557   15248 kic.go:199] duration metric: took 5.980334 seconds to extract preloaded images to volume
	I0127 20:09:44.759673   15248 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 20:09:44.905171   15248 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-851000 --name kubernetes-upgrade-851000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-851000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-851000 --network kubernetes-upgrade-851000 --ip 192.168.76.2 --volume kubernetes-upgrade-851000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0127 20:09:45.271761   15248 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-851000 --format={{.State.Running}}
	I0127 20:09:45.335980   15248 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-851000 --format={{.State.Status}}
	I0127 20:09:45.400501   15248 cli_runner.go:164] Run: docker exec kubernetes-upgrade-851000 stat /var/lib/dpkg/alternatives/iptables
	I0127 20:09:45.515693   15248 oci.go:144] the created container "kubernetes-upgrade-851000" has a running status.
	I0127 20:09:45.515761   15248 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa...
	I0127 20:09:45.655785   15248 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 20:09:45.768973   15248 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-851000 --format={{.State.Status}}
	I0127 20:09:45.828592   15248 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 20:09:45.828613   15248 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-851000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 20:09:45.934957   15248 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-851000 --format={{.State.Status}}
	I0127 20:09:45.993902   15248 machine.go:88] provisioning docker machine ...
	I0127 20:09:45.993944   15248 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-851000"
	I0127 20:09:45.994053   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:46.056846   15248 main.go:141] libmachine: Using SSH client type: native
	I0127 20:09:46.057093   15248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52919 <nil> <nil>}
	I0127 20:09:46.057108   15248 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-851000 && echo "kubernetes-upgrade-851000" | sudo tee /etc/hostname
	I0127 20:09:46.234039   15248 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-851000
	
	I0127 20:09:46.234133   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:46.293653   15248 main.go:141] libmachine: Using SSH client type: native
	I0127 20:09:46.293816   15248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52919 <nil> <nil>}
	I0127 20:09:46.293830   15248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-851000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-851000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-851000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:09:46.430064   15248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:09:46.430085   15248 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3092/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3092/.minikube}
	I0127 20:09:46.430103   15248 ubuntu.go:177] setting up certificates
	I0127 20:09:46.430113   15248 provision.go:83] configureAuth start
	I0127 20:09:46.430192   15248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-851000
	I0127 20:09:46.488986   15248 provision.go:138] copyHostCerts
	I0127 20:09:46.489087   15248 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem, removing ...
	I0127 20:09:46.489094   15248 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
	I0127 20:09:46.489206   15248 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem (1078 bytes)
	I0127 20:09:46.489406   15248 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem, removing ...
	I0127 20:09:46.489412   15248 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
	I0127 20:09:46.489476   15248 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem (1123 bytes)
	I0127 20:09:46.489641   15248 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem, removing ...
	I0127 20:09:46.489646   15248 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
	I0127 20:09:46.489709   15248 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem (1679 bytes)
	I0127 20:09:46.489830   15248 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-851000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-851000]
	I0127 20:09:46.654016   15248 provision.go:172] copyRemoteCerts
	I0127 20:09:46.654080   15248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:09:46.654128   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:46.711690   15248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52919 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa Username:docker}
	I0127 20:09:46.806726   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 20:09:46.825009   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0127 20:09:46.842316   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 20:09:46.870582   15248 provision.go:86] duration metric: configureAuth took 440.452004ms
	I0127 20:09:46.870595   15248 ubuntu.go:193] setting minikube options for container-runtime
	I0127 20:09:46.870757   15248 config.go:180] Loaded profile config "kubernetes-upgrade-851000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:09:46.870822   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:46.930590   15248 main.go:141] libmachine: Using SSH client type: native
	I0127 20:09:46.930755   15248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52919 <nil> <nil>}
	I0127 20:09:46.930768   15248 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:09:47.066817   15248 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0127 20:09:47.066834   15248 ubuntu.go:71] root file system type: overlay
	I0127 20:09:47.066994   15248 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:09:47.067083   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:47.124728   15248 main.go:141] libmachine: Using SSH client type: native
	I0127 20:09:47.124888   15248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52919 <nil> <nil>}
	I0127 20:09:47.124938   15248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:09:47.266852   15248 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:09:47.266958   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:47.326510   15248 main.go:141] libmachine: Using SSH client type: native
	I0127 20:09:47.326686   15248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 52919 <nil> <nil>}
	I0127 20:09:47.326699   15248 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:09:47.951055   15248 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:09:47.263560276 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0127 20:09:47.951085   15248 machine.go:91] provisioned docker machine in 1.957146411s
	I0127 20:09:47.951092   15248 client.go:171] LocalClient.Create took 10.220717758s
	I0127 20:09:47.951115   15248 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-851000" took 10.220807604s
	I0127 20:09:47.951127   15248 start.go:300] post-start starting for "kubernetes-upgrade-851000" (driver="docker")
	I0127 20:09:47.951132   15248 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:09:47.951234   15248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:09:47.951303   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:48.014378   15248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52919 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa Username:docker}
	I0127 20:09:48.108468   15248 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:09:48.112330   15248 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 20:09:48.112347   15248 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 20:09:48.112354   15248 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 20:09:48.112364   15248 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0127 20:09:48.112371   15248 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/addons for local assets ...
	I0127 20:09:48.112467   15248 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/files for local assets ...
	I0127 20:09:48.112649   15248 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> 44062.pem in /etc/ssl/certs
	I0127 20:09:48.112863   15248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:09:48.120524   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:09:48.141366   15248 start.go:303] post-start completed in 190.225802ms
	I0127 20:09:48.141933   15248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-851000
	I0127 20:09:48.203531   15248 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/config.json ...
	I0127 20:09:48.203954   15248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 20:09:48.204018   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:48.263541   15248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52919 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa Username:docker}
	I0127 20:09:48.356612   15248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 20:09:48.361330   15248 start.go:128] duration metric: createHost completed in 10.653398478s
	I0127 20:09:48.361349   15248 start.go:83] releasing machines lock for "kubernetes-upgrade-851000", held for 10.653536573s
	I0127 20:09:48.361440   15248 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-851000
	I0127 20:09:48.419601   15248 ssh_runner.go:195] Run: cat /version.json
	I0127 20:09:48.419618   15248 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0127 20:09:48.419676   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:48.419698   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:48.484181   15248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52919 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa Username:docker}
	I0127 20:09:48.484311   15248 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52919 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa Username:docker}
	I0127 20:09:48.574261   15248 ssh_runner.go:195] Run: systemctl --version
	I0127 20:09:48.798740   15248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 20:09:48.803927   15248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 20:09:48.824475   15248 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 20:09:48.824549   15248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0127 20:09:48.838461   15248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0127 20:09:48.846341   15248 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 20:09:48.846357   15248 start.go:472] detecting cgroup driver to use...
	I0127 20:09:48.846372   15248 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:09:48.846467   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:09:48.860136   15248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0127 20:09:48.868816   15248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:09:48.877589   15248 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:09:48.877656   15248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:09:48.886245   15248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:09:48.894642   15248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:09:48.903191   15248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:09:48.911571   15248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:09:48.919538   15248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 20:09:48.927988   15248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:09:48.935405   15248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:09:48.942677   15248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:09:49.011655   15248 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:09:49.083518   15248 start.go:472] detecting cgroup driver to use...
	I0127 20:09:49.083543   15248 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:09:49.083611   15248 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:09:49.095586   15248 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0127 20:09:49.095690   15248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:09:49.106926   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:09:49.124044   15248 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:09:49.197509   15248 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:09:49.273101   15248 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:09:49.273119   15248 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 20:09:49.315670   15248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:09:49.379190   15248 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:09:49.595073   15248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:09:49.626376   15248 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:09:49.680390   15248 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	I0127 20:09:49.680548   15248 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-851000 dig +short host.docker.internal
	I0127 20:09:49.798810   15248 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0127 20:09:49.798911   15248 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0127 20:09:49.803518   15248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
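The hosts edit uses the stage-then-copy pattern because an output redirection in a plain sudo invocation would be performed by the unprivileged calling shell, so the new file is written under /tmp and then moved into place with sudo cp. Verifying the entry afterwards is just:

    grep host.minikube.internal /etc/hosts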
	I0127 20:09:49.814095   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:09:49.873484   15248 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:09:49.873574   15248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:09:49.898922   15248 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0127 20:09:49.898953   15248 docker.go:560] Images already preloaded, skipping extraction
	I0127 20:09:49.899042   15248 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:09:49.923934   15248 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0127 20:09:49.923953   15248 cache_images.go:84] Images are preloaded, skipping loading
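The image list is read twice back to back, and both reads return the full v1.16.0 control-plane set, so the preload extraction and image loading are skipped. The same check by hand is the command the log already runs:

    docker images --format '{{.Repository}}:{{.Tag}}'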
	I0127 20:09:49.924049   15248 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 20:09:49.995558   15248 cni.go:84] Creating CNI manager for ""
	I0127 20:09:49.995577   15248 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:09:49.995594   15248 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 20:09:49.995613   15248 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-851000 NodeName:kubernetes-upgrade-851000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0127 20:09:49.995735   15248 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-851000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-851000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
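The four config objects above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered into a single file that kubeadm consumes; the cgroupDriver: cgroupfs value in the KubeletConfiguration has to agree with what Docker reports, which is what the docker info probe above checked. Once the copy steps below have run, the rendered file sits on the node and can be read back directly:

    sudo cat /var/tmp/minikube/kubeadm.yaml
    docker info --format '{{.CgroupDriver}}'   # should print cgroupfs to match the config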
	
	I0127 20:09:49.995815   15248 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-851000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-851000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
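The drop-in above replaces the kubelet ExecStart so the v1.16.0 binary runs against the docker runtime with the node IP pinned to 192.168.76.2. Once the 10-kubeadm.conf copy just below lands, the effective unit and its state can be read back with standard systemd commands (not something this log runs itself):

    systemctl cat kubelet
    systemctl status kubelet --no-pager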
	I0127 20:09:49.995903   15248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0127 20:09:50.004467   15248 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 20:09:50.004524   15248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 20:09:50.012099   15248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0127 20:09:50.025609   15248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 20:09:50.039016   15248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0127 20:09:50.052626   15248 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 20:09:50.056632   15248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:09:50.066803   15248 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000 for IP: 192.168.76.2
	I0127 20:09:50.066820   15248 certs.go:186] acquiring lock for shared ca certs: {Name:mk2d86ad31f10478b3fe72eedd54ef2fcd74cf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:50.067019   15248 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key
	I0127 20:09:50.067096   15248 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key
	I0127 20:09:50.067139   15248 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.key
	I0127 20:09:50.067153   15248 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.crt with IP's: []
	I0127 20:09:50.186102   15248 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.crt ...
	I0127 20:09:50.186117   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.crt: {Name:mk66f1e1d6c32e4dd6a2588655f6829a60009cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:50.186427   15248 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.key ...
	I0127 20:09:50.186436   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.key: {Name:mk5aa0176b16370bfbb46023b5bf870eafb0c09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:50.186678   15248 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.key.31bdca25
	I0127 20:09:50.186694   15248 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0127 20:09:50.230705   15248 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.crt.31bdca25 ...
	I0127 20:09:50.230732   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.crt.31bdca25: {Name:mk4a0ee8a8e78e9a610bf93f3869a3f7fd977374 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:50.230983   15248 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.key.31bdca25 ...
	I0127 20:09:50.230991   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.key.31bdca25: {Name:mkd561c918f4d38d45d8c238b7f6599c9a5738f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:50.231178   15248 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.crt
	I0127 20:09:50.231355   15248 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.key
	I0127 20:09:50.231513   15248 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.key
	I0127 20:09:50.231526   15248 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.crt with IP's: []
	I0127 20:09:50.345847   15248 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.crt ...
	I0127 20:09:50.345858   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.crt: {Name:mk5ce5c55ebd579e1a034c0246bec69fbf2b6b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:50.346141   15248 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.key ...
	I0127 20:09:50.346148   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.key: {Name:mke0b223360c1aaa95f070d9ba522b3f4754a175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:09:50.346557   15248 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem (1338 bytes)
	W0127 20:09:50.346609   15248 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406_empty.pem, impossibly tiny 0 bytes
	I0127 20:09:50.346622   15248 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 20:09:50.346656   15248 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem (1078 bytes)
	I0127 20:09:50.346688   15248 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem (1123 bytes)
	I0127 20:09:50.346722   15248 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem (1679 bytes)
	I0127 20:09:50.346788   15248 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:09:50.347299   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 20:09:50.366606   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 20:09:50.384091   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 20:09:50.401670   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 20:09:50.419426   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 20:09:50.436800   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 20:09:50.454305   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 20:09:50.472752   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 20:09:50.490511   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 20:09:50.508777   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem --> /usr/share/ca-certificates/4406.pem (1338 bytes)
	I0127 20:09:50.526750   15248 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /usr/share/ca-certificates/44062.pem (1708 bytes)
	I0127 20:09:50.544716   15248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
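Each certificate pushed above can be inspected on the node to confirm the requested SANs (192.168.76.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1 for the apiserver cert generated earlier); a minimal check against the /var/lib/minikube/certs layout used here:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'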
	I0127 20:09:50.557980   15248 ssh_runner.go:195] Run: openssl version
	I0127 20:09:50.564851   15248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 20:09:50.573562   15248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:09:50.577635   15248 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:31 /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:09:50.577687   15248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:09:50.583440   15248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 20:09:50.591776   15248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4406.pem && ln -fs /usr/share/ca-certificates/4406.pem /etc/ssl/certs/4406.pem"
	I0127 20:09:50.599868   15248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4406.pem
	I0127 20:09:50.603964   15248 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:36 /usr/share/ca-certificates/4406.pem
	I0127 20:09:50.604022   15248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4406.pem
	I0127 20:09:50.609776   15248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4406.pem /etc/ssl/certs/51391683.0"
	I0127 20:09:50.618488   15248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44062.pem && ln -fs /usr/share/ca-certificates/44062.pem /etc/ssl/certs/44062.pem"
	I0127 20:09:50.627010   15248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44062.pem
	I0127 20:09:50.631184   15248 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:36 /usr/share/ca-certificates/44062.pem
	I0127 20:09:50.631225   15248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44062.pem
	I0127 20:09:50.636792   15248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44062.pem /etc/ssl/certs/3ec20f2e.0"
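The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes printed by the x509 -hash calls, which is the standard layout OpenSSL expects under /etc/ssl/certs. Reproducing one by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the link created above
    ls -l /etc/ssl/certs/b5213941.0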
	I0127 20:09:50.645035   15248 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-851000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-851000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:09:50.645145   15248 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:09:50.668413   15248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 20:09:50.676540   15248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 20:09:50.685028   15248 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 20:09:50.685096   15248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:09:50.693250   15248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 20:09:50.693279   15248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 20:09:50.742182   15248 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0127 20:09:50.742264   15248 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 20:09:51.050301   15248 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 20:09:51.050433   15248 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 20:09:51.050584   15248 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 20:09:51.278831   15248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 20:09:51.280101   15248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 20:09:51.286722   15248 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0127 20:09:51.356905   15248 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 20:09:51.398252   15248 out.go:204]   - Generating certificates and keys ...
	I0127 20:09:51.398385   15248 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 20:09:51.398480   15248 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 20:09:51.618116   15248 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 20:09:51.988017   15248 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0127 20:09:52.083663   15248 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0127 20:09:52.309691   15248 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0127 20:09:52.382708   15248 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0127 20:09:52.382834   15248 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-851000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 20:09:52.597272   15248 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0127 20:09:52.597765   15248 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-851000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 20:09:52.837101   15248 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 20:09:53.317112   15248 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 20:09:53.461710   15248 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0127 20:09:53.461795   15248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 20:09:53.684812   15248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 20:09:53.741281   15248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 20:09:53.889902   15248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 20:09:54.108157   15248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 20:09:54.108862   15248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 20:09:54.151188   15248 out.go:204]   - Booting up control plane ...
	I0127 20:09:54.151390   15248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 20:09:54.151513   15248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 20:09:54.151635   15248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 20:09:54.151789   15248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 20:09:54.152031   15248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 20:10:34.117506   15248 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 20:10:34.117930   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:10:34.118120   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:10:39.120176   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:10:39.120420   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:10:49.120831   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:10:49.120998   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:11:09.121261   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:11:09.121495   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:11:49.121734   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:11:49.121939   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:11:49.121953   15248 kubeadm.go:322] 
	I0127 20:11:49.121994   15248 kubeadm.go:322] Unfortunately, an error has occurred:
	I0127 20:11:49.122064   15248 kubeadm.go:322] 	timed out waiting for the condition
	I0127 20:11:49.122073   15248 kubeadm.go:322] 
	I0127 20:11:49.122124   15248 kubeadm.go:322] This error is likely caused by:
	I0127 20:11:49.122153   15248 kubeadm.go:322] 	- The kubelet is not running
	I0127 20:11:49.122237   15248 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 20:11:49.122245   15248 kubeadm.go:322] 
	I0127 20:11:49.122349   15248 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 20:11:49.122391   15248 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0127 20:11:49.122435   15248 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0127 20:11:49.122444   15248 kubeadm.go:322] 
	I0127 20:11:49.122565   15248 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 20:11:49.122655   15248 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0127 20:11:49.122720   15248 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0127 20:11:49.122761   15248 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0127 20:11:49.122817   15248 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0127 20:11:49.122846   15248 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0127 20:11:49.125608   15248 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 20:11:49.125717   15248 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 20:11:49.125859   15248 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0127 20:11:49.125983   15248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 20:11:49.126064   15248 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 20:11:49.126139   15248 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
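The four WARNING lines above are preflight advisories, not the failure itself; the actual reason the kubelet never answered on 127.0.0.1:10248 has to come from its own journal and from the container runtime, which is exactly what minikube gathers further down. Running the suggested checks by hand on the node would look like:

    sudo journalctl -u kubelet -n 400 --no-pager
    docker ps -a | grep kube | grep -v pause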
	W0127 20:11:49.126326   15248 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-851000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-851000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 20:11:49.126375   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0127 20:11:49.548869   15248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:11:49.558995   15248 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 20:11:49.559054   15248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:11:49.567724   15248 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 20:11:49.567773   15248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 20:11:49.617548   15248 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0127 20:11:49.617616   15248 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 20:11:50.063164   15248 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 20:11:50.063250   15248 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 20:11:50.063330   15248 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 20:11:50.354427   15248 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 20:11:50.354711   15248 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 20:11:50.363744   15248 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0127 20:11:50.435335   15248 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 20:11:50.459814   15248 out.go:204]   - Generating certificates and keys ...
	I0127 20:11:50.459913   15248 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 20:11:50.460012   15248 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 20:11:50.460095   15248 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 20:11:50.460167   15248 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0127 20:11:50.460243   15248 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 20:11:50.460345   15248 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0127 20:11:50.460413   15248 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0127 20:11:50.460484   15248 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0127 20:11:50.460573   15248 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 20:11:50.460678   15248 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 20:11:50.460734   15248 kubeadm.go:322] [certs] Using the existing "sa" key
	I0127 20:11:50.460795   15248 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 20:11:50.659723   15248 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 20:11:50.748695   15248 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 20:11:50.856862   15248 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 20:11:51.177063   15248 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 20:11:51.177706   15248 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 20:11:51.223858   15248 out.go:204]   - Booting up control plane ...
	I0127 20:11:51.224045   15248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 20:11:51.224243   15248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 20:11:51.224363   15248 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 20:11:51.224512   15248 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 20:11:51.224791   15248 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 20:12:31.187441   15248 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 20:12:31.188182   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:12:31.188355   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:12:36.189219   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:12:36.189416   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:12:46.189800   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:12:46.189972   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:13:06.190418   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:13:06.190575   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:13:46.191486   15248 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:13:46.191692   15248 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:13:46.191705   15248 kubeadm.go:322] 
	I0127 20:13:46.191757   15248 kubeadm.go:322] Unfortunately, an error has occurred:
	I0127 20:13:46.191793   15248 kubeadm.go:322] 	timed out waiting for the condition
	I0127 20:13:46.191806   15248 kubeadm.go:322] 
	I0127 20:13:46.191840   15248 kubeadm.go:322] This error is likely caused by:
	I0127 20:13:46.191881   15248 kubeadm.go:322] 	- The kubelet is not running
	I0127 20:13:46.192021   15248 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 20:13:46.192033   15248 kubeadm.go:322] 
	I0127 20:13:46.192131   15248 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 20:13:46.192160   15248 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0127 20:13:46.192186   15248 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0127 20:13:46.192196   15248 kubeadm.go:322] 
	I0127 20:13:46.192284   15248 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 20:13:46.192357   15248 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0127 20:13:46.192446   15248 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0127 20:13:46.192492   15248 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0127 20:13:46.192580   15248 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0127 20:13:46.192636   15248 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0127 20:13:46.197867   15248 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 20:13:46.198009   15248 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 20:13:46.198200   15248 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0127 20:13:46.198321   15248 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 20:13:46.198412   15248 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 20:13:46.198493   15248 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0127 20:13:46.198508   15248 kubeadm.go:403] StartCluster complete in 3m55.554344459s
	I0127 20:13:46.198615   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:13:46.234257   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.234276   15248 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:13:46.234373   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:13:46.263083   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.263100   15248 logs.go:281] No container was found matching "etcd"
	I0127 20:13:46.263220   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:13:46.294321   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.294339   15248 logs.go:281] No container was found matching "coredns"
	I0127 20:13:46.294435   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:13:46.333013   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.333032   15248 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:13:46.333117   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:13:46.364508   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.364523   15248 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:13:46.364604   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:13:46.394247   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.394266   15248 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:13:46.394357   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:13:46.422699   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.422717   15248 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:13:46.422827   15248 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:13:46.457782   15248 logs.go:279] 0 containers: []
	W0127 20:13:46.457799   15248 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:13:46.457815   15248 logs.go:124] Gathering logs for kubelet ...
	I0127 20:13:46.457823   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:13:46.512822   15248 logs.go:124] Gathering logs for dmesg ...
	I0127 20:13:46.512846   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:13:46.532324   15248 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:13:46.532339   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:13:46.613274   15248 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:13:46.613288   15248 logs.go:124] Gathering logs for Docker ...
	I0127 20:13:46.613295   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:13:46.641754   15248 logs.go:124] Gathering logs for container status ...
	I0127 20:13:46.641774   15248 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:13:48.704624   15248 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062841034s)
	W0127 20:13:48.704814   15248 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 20:13:48.704872   15248 out.go:239] * 
	W0127 20:13:48.705060   15248 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 20:13:48.705096   15248 out.go:239] * 
	W0127 20:13:48.705969   15248 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:13:48.799857   15248 out.go:177] 
	W0127 20:13:48.869188   15248 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 20:13:48.869374   15248 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 20:13:48.869491   15248 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 20:13:48.911541   15248 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-851000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-851000: (1.644655156s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-851000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-851000 status --format={{.Host}}: exit status 7 (124.344536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m41.673306209s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-851000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (633.498833ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-851000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-851000
	    minikube start -p kubernetes-upgrade-851000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8510002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-851000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-851000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (20.713254498s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-01-27 20:18:53.869716 -0800 PST m=+2922.862513248
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-851000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-851000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "32fbd53635a67ec5b666a00a263c89b706550256d9024811eee27e0872b41257",
	        "Created": "2023-01-28T04:09:44.960957891Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200694,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:13:52.321405979Z",
	            "FinishedAt": "2023-01-28T04:13:49.539790304Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/32fbd53635a67ec5b666a00a263c89b706550256d9024811eee27e0872b41257/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/32fbd53635a67ec5b666a00a263c89b706550256d9024811eee27e0872b41257/hostname",
	        "HostsPath": "/var/lib/docker/containers/32fbd53635a67ec5b666a00a263c89b706550256d9024811eee27e0872b41257/hosts",
	        "LogPath": "/var/lib/docker/containers/32fbd53635a67ec5b666a00a263c89b706550256d9024811eee27e0872b41257/32fbd53635a67ec5b666a00a263c89b706550256d9024811eee27e0872b41257-json.log",
	        "Name": "/kubernetes-upgrade-851000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-851000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-851000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c78e3464f2748393c79f22870d8dfa9050cb54bb8dc183089cfa716954c68418-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c78e3464f2748393c79f22870d8dfa9050cb54bb8dc183089cfa716954c68418/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c78e3464f2748393c79f22870d8dfa9050cb54bb8dc183089cfa716954c68418/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c78e3464f2748393c79f22870d8dfa9050cb54bb8dc183089cfa716954c68418/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-851000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-851000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-851000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-851000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-851000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bada2b1325badb6f3ce5f296192a5ee23710932696e4a02b37b084c7abb660ee",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53172"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53173"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53174"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53175"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53176"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bada2b1325ba",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-851000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "32fbd53635a6",
	                        "kubernetes-upgrade-851000"
	                    ],
	                    "NetworkID": "02b6f06557f7b7c83ec3ec0042bcf65036015c777d36425d9af6efcb0fc750a5",
	                    "EndpointID": "60ecb8e09197e44b0fd35475f03d55bc3a1d951371f2dcbcfb84abb3313eb77f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-851000 -n kubernetes-upgrade-851000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-851000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-851000 logs -n 25: (3.631284636s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo cat                            | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo cat                            | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-851000                         | kubernetes-upgrade-851000 | jenkins | v1.28.0 | 27 Jan 23 20:18 PST |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-851000                         | kubernetes-upgrade-851000 | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo cat                            | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo docker                         | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo cat                            | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo cat                            | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo cat                            | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo cat                            | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo                                | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo find                           | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-259000 sudo crio                           | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p calico-259000                                     | calico-259000             | jenkins | v1.28.0 | 27 Jan 23 20:18 PST | 27 Jan 23 20:18 PST |
	| start   | -p custom-flannel-259000                             | custom-flannel-259000     | jenkins | v1.28.0 | 27 Jan 23 20:18 PST |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
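Editor's note: the table above records the "minikube ssh -p calico-259000 ..." diagnostics the harness collects (docker, cri-docker, containerd and crio state) before the profile is deleted. As a rough illustration only, the Go sketch below drives the same binary to gather a subset of those diagnostics; the binary path and profile name are copied from the table, the command list is trimmed, and error handling is deliberately thin.

// collect_runtime_diag.go - hypothetical sketch of replaying a few of the
// diagnostic commands listed in the table above via `minikube ssh`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const minikube = "out/minikube-darwin-amd64" // binary used in this run
	const profile = "calico-259000"              // profile shown in the table

	diags := []string{
		"sudo systemctl status docker --all --full --no-pager",
		"sudo cat /etc/docker/daemon.json",
		"sudo systemctl cat cri-docker --no-pager",
		"sudo cat /etc/containerd/config.toml",
		"sudo crio config",
	}
	for _, d := range diags {
		// Each table row is effectively `minikube ssh -p <profile> <command...>`.
		args := append([]string{"ssh", "-p", profile}, strings.Fields(d)...)
		out, err := exec.Command(minikube, args...).CombinedOutput()
		fmt.Printf("== %s ==\n%s", d, out)
		if err != nil {
			fmt.Printf("(command exited with error: %v)\n", err)
		}
	}
}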
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 20:18:44
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 20:18:44.707564   18275 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:18:44.707719   18275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:18:44.707725   18275 out.go:309] Setting ErrFile to fd 2...
	I0127 20:18:44.707729   18275 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:18:44.707835   18275 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 20:18:44.708394   18275 out.go:303] Setting JSON to false
	I0127 20:18:44.728063   18275 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4698,"bootTime":1674874826,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 20:18:44.728155   18275 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:18:44.771345   18275 out.go:177] * [custom-flannel-259000] minikube v1.28.0 on Darwin 13.2
	I0127 20:18:44.794895   18275 notify.go:220] Checking for updates...
	I0127 20:18:44.816597   18275 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:18:44.837687   18275 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:18:44.858706   18275 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:18:44.881519   18275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:18:44.939668   18275 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 20:18:44.999408   18275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:18:45.021104   18275 config.go:180] Loaded profile config "kubernetes-upgrade-851000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:18:45.021165   18275 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:18:45.095266   18275 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 20:18:45.095399   18275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:18:45.270041   18275 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 04:18:45.156630572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:18:45.315341   18275 out.go:177] * Using the docker driver based on user configuration
	I0127 20:18:45.336480   18275 start.go:296] selected driver: docker
	I0127 20:18:45.336493   18275 start.go:840] validating driver "docker" against <nil>
	I0127 20:18:45.336503   18275 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:18:45.339688   18275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:18:45.498064   18275 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 04:18:45.397076192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:18:45.498202   18275 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 20:18:45.498357   18275 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:18:45.521607   18275 out.go:177] * Using Docker Desktop driver with root privileges
	I0127 20:18:45.542720   18275 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0127 20:18:45.542789   18275 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0127 20:18:45.542805   18275 start_flags.go:319] config:
	{Name:custom-flannel-259000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:custom-flannel-259000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet St
aticIP:}
	I0127 20:18:45.563790   18275 out.go:177] * Starting control plane node custom-flannel-259000 in cluster custom-flannel-259000
	I0127 20:18:45.584586   18275 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 20:18:45.605798   18275 out.go:177] * Pulling base image ...
	I0127 20:18:45.647653   18275 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:18:45.647686   18275 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 20:18:45.647698   18275 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0127 20:18:45.647713   18275 cache.go:57] Caching tarball of preloaded images
	I0127 20:18:45.647863   18275 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:18:45.647876   18275 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0127 20:18:45.648561   18275 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/config.json ...
	I0127 20:18:45.648652   18275 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/config.json: {Name:mk21843b54a49d6a6347fb7ba87199da7085cf7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:18:45.717236   18275 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0127 20:18:45.717260   18275 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0127 20:18:45.717287   18275 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:18:45.717355   18275 start.go:364] acquiring machines lock for custom-flannel-259000: {Name:mk1c111539502c52cb02069a229f85b321846f12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:18:45.717604   18275 start.go:368] acquired machines lock for "custom-flannel-259000" in 231.482µs
	I0127 20:18:45.717662   18275 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-259000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:custom-flannel-259000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:18:45.717813   18275 start.go:125] createHost starting for "" (driver="docker")
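Editor's note: the image.go lines above check whether the kicbase image already exists in the local Docker daemon and skip the pull when it does. The sketch below shows the same "inspect before pull" idea; it is not minikube's own implementation (which talks to the daemon programmatically), it simply shells out to the docker CLI, and the sha256 digest suffix from the log is omitted for brevity.

// Hypothetical sketch of the "found in local docker daemon, skipping pull" step.
package main

import (
	"fmt"
	"os/exec"
)

// ensureImage pulls ref only if `docker image inspect` reports it missing.
func ensureImage(ref string) error {
	if err := exec.Command("docker", "image", "inspect", ref).Run(); err == nil {
		fmt.Println("found in local docker daemon, skipping pull:", ref)
		return nil
	}
	fmt.Println("pulling:", ref)
	out, err := exec.Command("docker", "pull", ref).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker pull failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Image reference copied from the log above (digest dropped).
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541"
	if err := ensureImage(ref); err != nil {
		fmt.Println(err)
	}
}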
	I0127 20:18:43.602866   18021 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53176/healthz ...
	I0127 20:18:43.612884   18021 api_server.go:278] https://127.0.0.1:53176/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 20:18:43.612906   18021 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:53176/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 20:18:44.086128   18021 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53176/healthz ...
	I0127 20:18:44.092523   18021 api_server.go:278] https://127.0.0.1:53176/healthz returned 200:
	ok
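Editor's note: the api_server.go lines above poll https://127.0.0.1:53176/healthz, treat the 500 responses (failing rbac/bootstrap-roles post-start hook) as retryable, and stop once a 200 "ok" comes back. The sketch below shows that retry loop in isolation; minikube's real implementation authenticates against the cluster CA and uses its own retry helper, whereas this standalone version skips TLS verification purely to stay self-contained.

// Hypothetical sketch of the healthz polling visible above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // comparable to the ~473ms retry in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:53176/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}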
	I0127 20:18:44.106401   18021 system_pods.go:86] 5 kube-system pods found
	I0127 20:18:44.106420   18021 system_pods.go:89] "etcd-kubernetes-upgrade-851000" [e16d6f50-45da-459d-9e48-27b28d3f917b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 20:18:44.106430   18021 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-851000" [21c505f8-a73a-45ba-9fc8-bbb8317fb880] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 20:18:44.106442   18021 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-851000" [c7380d2f-a2e7-441d-bfb9-fc28f544b377] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 20:18:44.106449   18021 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-851000" [45cb66f6-632b-4293-ba42-a367ba11586f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 20:18:44.106454   18021 system_pods.go:89] "storage-provisioner" [9a67e935-eac4-4c9e-a4b8-d9cde9515692] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0127 20:18:44.106461   18021 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy
	I0127 20:18:44.106485   18021 kubeadm.go:1120] stopping kube-system containers ...
	I0127 20:18:44.106567   18021 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:18:44.135797   18021 docker.go:456] Stopping containers: [b99ee961a6b1 04f1a9af433d f43c8a49506d 725213254f27 1a8079dec470 f9438feca628 4906277f9723 4495b8a8ec33 87be7d269390 c4114c8a7824 e461778cab16 8a0329ea1a49 222b03a0b3ab d72a94f99e0b 26f4fce98139 03d7506982a2 d126c9b4a17a]
	I0127 20:18:44.135884   18021 ssh_runner.go:195] Run: docker stop b99ee961a6b1 04f1a9af433d f43c8a49506d 725213254f27 1a8079dec470 f9438feca628 4906277f9723 4495b8a8ec33 87be7d269390 c4114c8a7824 e461778cab16 8a0329ea1a49 222b03a0b3ab d72a94f99e0b 26f4fce98139 03d7506982a2 d126c9b4a17a
	I0127 20:18:45.150503   18021 ssh_runner.go:235] Completed: docker stop b99ee961a6b1 04f1a9af433d f43c8a49506d 725213254f27 1a8079dec470 f9438feca628 4906277f9723 4495b8a8ec33 87be7d269390 c4114c8a7824 e461778cab16 8a0329ea1a49 222b03a0b3ab d72a94f99e0b 26f4fce98139 03d7506982a2 d126c9b4a17a: (1.01460147s)
	I0127 20:18:45.150658   18021 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 20:18:45.239420   18021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:18:45.251114   18021 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 28 04:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan 28 04:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan 28 04:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jan 28 04:18 /etc/kubernetes/scheduler.conf
	
	I0127 20:18:45.251224   18021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 20:18:45.319874   18021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 20:18:45.331924   18021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 20:18:45.341378   18021 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:18:45.341422   18021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 20:18:45.351435   18021 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 20:18:45.361353   18021 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:18:45.361426   18021 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
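Editor's note: the kubeadm.go lines above keep each existing kubeconfig-style file only if it already references https://control-plane.minikube.internal:8443; when grep exits non-zero the file is treated as stale and removed so the later "kubeadm init phase kubeconfig" regenerates it. The sketch below reproduces that check; minikube runs these commands on the node over SSH (ssh_runner), while this version runs them on the local host for simplicity.

// Hypothetical sketch of the "grep the endpoint or remove the file" logic.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the endpoint is absent, which we treat as "stale".
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}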
	I0127 20:18:45.371146   18021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 20:18:45.413424   18021 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0127 20:18:45.413442   18021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:18:45.472218   18021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:18:46.557017   18021 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084784605s)
	I0127 20:18:46.557034   18021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:18:46.716971   18021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:18:46.813331   18021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:18:46.918786   18021 api_server.go:51] waiting for apiserver process to appear ...
	I0127 20:18:46.918865   18021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:18:47.433214   18021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:18:47.933526   18021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:18:47.949258   18021 api_server.go:71] duration metric: took 1.030521116s to wait for apiserver process to appear ...
	I0127 20:18:47.949281   18021 api_server.go:87] waiting for apiserver healthz status ...
	I0127 20:18:47.949302   18021 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53176/healthz ...
	I0127 20:18:45.760740   18275 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0127 20:18:45.760992   18275 start.go:159] libmachine.API.Create for "custom-flannel-259000" (driver="docker")
	I0127 20:18:45.761017   18275 client.go:168] LocalClient.Create starting
	I0127 20:18:45.761721   18275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem
	I0127 20:18:45.761978   18275 main.go:141] libmachine: Decoding PEM data...
	I0127 20:18:45.762020   18275 main.go:141] libmachine: Parsing certificate...
	I0127 20:18:45.762095   18275 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem
	I0127 20:18:45.762154   18275 main.go:141] libmachine: Decoding PEM data...
	I0127 20:18:45.762171   18275 main.go:141] libmachine: Parsing certificate...
	I0127 20:18:45.763023   18275 cli_runner.go:164] Run: docker network inspect custom-flannel-259000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 20:18:45.824693   18275 cli_runner.go:211] docker network inspect custom-flannel-259000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 20:18:45.824831   18275 network_create.go:281] running [docker network inspect custom-flannel-259000] to gather additional debugging logs...
	I0127 20:18:45.824852   18275 cli_runner.go:164] Run: docker network inspect custom-flannel-259000
	W0127 20:18:45.885117   18275 cli_runner.go:211] docker network inspect custom-flannel-259000 returned with exit code 1
	I0127 20:18:45.885159   18275 network_create.go:284] error running [docker network inspect custom-flannel-259000]: docker network inspect custom-flannel-259000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-flannel-259000
	I0127 20:18:45.885179   18275 network_create.go:286] output of [docker network inspect custom-flannel-259000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-flannel-259000
	
	** /stderr **
	I0127 20:18:45.885277   18275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 20:18:45.948682   18275 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:18:45.949102   18275 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012916f0}
	I0127 20:18:45.949116   18275 network_create.go:123] attempt to create docker network custom-flannel-259000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0127 20:18:45.949196   18275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-259000 custom-flannel-259000
	W0127 20:18:46.015402   18275 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-259000 custom-flannel-259000 returned with exit code 1
	W0127 20:18:46.015454   18275 network_create.go:148] failed to create docker network custom-flannel-259000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-259000 custom-flannel-259000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0127 20:18:46.015480   18275 network_create.go:115] failed to create docker network custom-flannel-259000 192.168.58.0/24, will retry: subnet is taken
	I0127 20:18:46.016983   18275 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:18:46.017395   18275 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004f6760}
	I0127 20:18:46.017408   18275 network_create.go:123] attempt to create docker network custom-flannel-259000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0127 20:18:46.017484   18275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-259000 custom-flannel-259000
	I0127 20:18:46.118235   18275 network_create.go:107] docker network custom-flannel-259000 192.168.67.0/24 created
	I0127 20:18:46.118277   18275 kic.go:117] calculated static IP "192.168.67.2" for the "custom-flannel-259000" container
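Editor's note: the network_create.go lines above show the subnet-fallback behaviour: 192.168.49.0/24 is skipped as reserved, the create on 192.168.58.0/24 fails with "Pool overlaps with other one on this address space", and the retry on 192.168.67.0/24 succeeds. The sketch below captures just that fallback loop; the real code also pre-filters reserved subnets by inspecting existing networks and attaches minikube's labels and MTU options, which are omitted here. The candidate list mirrors the subnets seen in this log.

// Hypothetical sketch of creating a bridge network with subnet fallback.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createNetwork(name string) (string, error) {
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	for _, subnet := range candidates {
		gateway := strings.TrimSuffix(subnet, "0/24") + "1" // e.g. 192.168.58.1
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet already taken by another network, try the next one
		}
		return "", fmt.Errorf("docker network create failed: %v\n%s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createNetwork("custom-flannel-259000")
	fmt.Println(subnet, err)
}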
	I0127 20:18:46.118409   18275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 20:18:46.183368   18275 cli_runner.go:164] Run: docker volume create custom-flannel-259000 --label name.minikube.sigs.k8s.io=custom-flannel-259000 --label created_by.minikube.sigs.k8s.io=true
	I0127 20:18:46.244858   18275 oci.go:103] Successfully created a docker volume custom-flannel-259000
	I0127 20:18:46.244976   18275 cli_runner.go:164] Run: docker run --rm --name custom-flannel-259000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-259000 --entrypoint /usr/bin/test -v custom-flannel-259000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0127 20:18:46.738990   18275 oci.go:107] Successfully prepared a docker volume custom-flannel-259000
	I0127 20:18:46.739028   18275 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:18:46.739043   18275 kic.go:190] Starting extracting preloaded images to volume ...
	I0127 20:18:46.739175   18275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-259000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
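Editor's note: the kic.go lines above extract the preloaded image tarball into the freshly created machine volume by running a throwaway container whose entrypoint is tar. The sketch below simply replays that docker run invocation from Go; the tarball path, volume name and base image are copied from the log (digest suffix dropped), so treat it as an illustration of the step rather than minikube's own code path.

// Hypothetical sketch of extracting the preload tarball into a docker volume.
package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4"
	volume := "custom-flannel-259000"
	base := "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541"

	// Mount the tarball read-only plus the machine volume, then untar into it.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		base,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}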
	I0127 20:18:50.728395   18021 api_server.go:278] https://127.0.0.1:53176/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 20:18:50.728416   18021 api_server.go:102] status: https://127.0.0.1:53176/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 20:18:51.228647   18021 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53176/healthz ...
	I0127 20:18:51.235859   18021 api_server.go:278] https://127.0.0.1:53176/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 20:18:51.235880   18021 api_server.go:102] status: https://127.0.0.1:53176/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 20:18:51.728530   18021 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53176/healthz ...
	I0127 20:18:51.734410   18021 api_server.go:278] https://127.0.0.1:53176/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 20:18:51.734430   18021 api_server.go:102] status: https://127.0.0.1:53176/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 20:18:52.228706   18021 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53176/healthz ...
	I0127 20:18:52.235721   18021 api_server.go:278] https://127.0.0.1:53176/healthz returned 200:
	ok
	I0127 20:18:52.244984   18021 api_server.go:140] control plane version: v1.26.1
	I0127 20:18:52.245006   18021 api_server.go:130] duration metric: took 4.295733331s to wait for apiserver health ...
	I0127 20:18:52.245015   18021 cni.go:84] Creating CNI manager for ""
	I0127 20:18:52.245026   18021 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:18:52.275894   18021 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 20:18:52.297560   18021 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 20:18:52.307398   18021 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0127 20:18:52.327466   18021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 20:18:52.334375   18021 system_pods.go:59] 5 kube-system pods found
	I0127 20:18:52.334396   18021 system_pods.go:61] "etcd-kubernetes-upgrade-851000" [e16d6f50-45da-459d-9e48-27b28d3f917b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 20:18:52.334404   18021 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-851000" [21c505f8-a73a-45ba-9fc8-bbb8317fb880] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 20:18:52.334410   18021 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-851000" [c7380d2f-a2e7-441d-bfb9-fc28f544b377] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 20:18:52.334416   18021 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-851000" [45cb66f6-632b-4293-ba42-a367ba11586f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 20:18:52.334422   18021 system_pods.go:61] "storage-provisioner" [9a67e935-eac4-4c9e-a4b8-d9cde9515692] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0127 20:18:52.334427   18021 system_pods.go:74] duration metric: took 6.950952ms to wait for pod list to return data ...
	I0127 20:18:52.334434   18021 node_conditions.go:102] verifying NodePressure condition ...
	I0127 20:18:52.337733   18021 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0127 20:18:52.337748   18021 node_conditions.go:123] node cpu capacity is 6
	I0127 20:18:52.337765   18021 node_conditions.go:105] duration metric: took 3.32597ms to run NodePressure ...
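Editor's note: the system_pods.go and node_conditions.go lines above list the kube-system pods and read the node's ephemeral-storage and CPU capacity before continuing with the addon phase. minikube does this through its own helpers; the sketch below performs equivalent checks with client-go, with the kubeconfig path taken from the log, and is an assumption-laden illustration rather than the code that produced these lines.

// Hypothetical sketch of the pod and node-capacity checks using client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/15565-3092/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}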
	I0127 20:18:52.337781   18021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:18:52.533847   18021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 20:18:52.545398   18021 ops.go:34] apiserver oom_adj: -16
	I0127 20:18:52.545421   18021 kubeadm.go:637] restartCluster took 11.311879604s
	I0127 20:18:52.545447   18021 kubeadm.go:403] StartCluster complete in 11.396986271s
	I0127 20:18:52.545472   18021 settings.go:142] acquiring lock: {Name:mk92099370375c5a2a7c1c2d1ac11f51c379e71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:18:52.545589   18021 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:18:52.546208   18021 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/kubeconfig: {Name:mkdfca390fbcfbb59336162afe07d375994efabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:18:52.546542   18021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 20:18:52.546614   18021 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0127 20:18:52.546719   18021 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-851000"
	I0127 20:18:52.546721   18021 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-851000"
	I0127 20:18:52.546735   18021 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-851000"
	W0127 20:18:52.546741   18021 addons.go:236] addon storage-provisioner should already be in state true
	I0127 20:18:52.546743   18021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-851000"
	I0127 20:18:52.546771   18021 config.go:180] Loaded profile config "kubernetes-upgrade-851000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:18:52.546780   18021 host.go:66] Checking if "kubernetes-upgrade-851000" exists ...
	I0127 20:18:52.547209   18021 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-851000 --format={{.State.Status}}
	I0127 20:18:52.547223   18021 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-851000 --format={{.State.Status}}
	I0127 20:18:52.547314   18021 kapi.go:59] client config for kubernetes-upgrade-851000: &rest.Config{Host:"https://127.0.0.1:53176", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 20:18:52.554904   18021 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-851000" context rescaled to 1 replicas
	I0127 20:18:52.554945   18021 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
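Editor's note: the kapi.go dump above shows the rest.Config minikube builds for the kubernetes-upgrade-851000 profile: the local apiserver endpoint plus the profile's client certificate, key and CA file. The sketch below builds an equivalent client-go config directly from those paths and reads the coredns Deployment that the "rescaled to 1 replicas" line refers to; the WrapTransport, QPS and retry details of the real config are left out, so this is a simplified assumption-level illustration.

// Hypothetical sketch of a client built from the cert paths shown above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://127.0.0.1:53176",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Counterpart of the coredns rescale check: read the Deployment's replica count.
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("coredns replicas:", *dep.Spec.Replicas)
}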
	I0127 20:18:52.600153   18021 out.go:177] * Verifying Kubernetes components...
	I0127 20:18:52.621229   18021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:18:52.660649   18021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 20:18:52.667267   18021 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 20:18:52.667283   18021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 20:18:52.667403   18021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:18:52.670471   18021 kapi.go:59] client config for kubernetes-upgrade-851000: &rest.Config{Host:"https://127.0.0.1:53176", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubernetes-upgrade-851000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 20:18:52.746227   18021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53172 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa Username:docker}
	I0127 20:18:52.748556   18021 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-851000"
	W0127 20:18:52.748572   18021 addons.go:236] addon default-storageclass should already be in state true
	I0127 20:18:52.748588   18021 host.go:66] Checking if "kubernetes-upgrade-851000" exists ...
	I0127 20:18:52.749260   18021 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-851000 --format={{.State.Status}}
	I0127 20:18:52.750139   18021 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0127 20:18:52.750158   18021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:18:52.842940   18021 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 20:18:52.842968   18021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 20:18:52.842968   18021 api_server.go:51] waiting for apiserver process to appear ...
	I0127 20:18:52.843056   18021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:18:52.843099   18021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-851000
	I0127 20:18:52.859206   18021 api_server.go:71] duration metric: took 304.229667ms to wait for apiserver process to appear ...
	I0127 20:18:52.859232   18021 api_server.go:87] waiting for apiserver healthz status ...
	I0127 20:18:52.859244   18021 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:53176/healthz ...
	I0127 20:18:52.865274   18021 api_server.go:278] https://127.0.0.1:53176/healthz returned 200:
	ok
	I0127 20:18:52.867420   18021 api_server.go:140] control plane version: v1.26.1
	I0127 20:18:52.867458   18021 api_server.go:130] duration metric: took 8.215028ms to wait for apiserver health ...
	I0127 20:18:52.867467   18021 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 20:18:52.873197   18021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 20:18:52.877700   18021 system_pods.go:59] 5 kube-system pods found
	I0127 20:18:52.877729   18021 system_pods.go:61] "etcd-kubernetes-upgrade-851000" [e16d6f50-45da-459d-9e48-27b28d3f917b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 20:18:52.877740   18021 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-851000" [21c505f8-a73a-45ba-9fc8-bbb8317fb880] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 20:18:52.877755   18021 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-851000" [c7380d2f-a2e7-441d-bfb9-fc28f544b377] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 20:18:52.877761   18021 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-851000" [45cb66f6-632b-4293-ba42-a367ba11586f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 20:18:52.877766   18021 system_pods.go:61] "storage-provisioner" [9a67e935-eac4-4c9e-a4b8-d9cde9515692] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0127 20:18:52.877770   18021 system_pods.go:74] duration metric: took 10.298346ms to wait for pod list to return data ...
	I0127 20:18:52.877778   18021 kubeadm.go:578] duration metric: took 322.808766ms to wait for : map[apiserver:true system_pods:true] ...
	I0127 20:18:52.877789   18021 node_conditions.go:102] verifying NodePressure condition ...
	I0127 20:18:52.886595   18021 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0127 20:18:52.886610   18021 node_conditions.go:123] node cpu capacity is 6
	I0127 20:18:52.886617   18021 node_conditions.go:105] duration metric: took 8.824098ms to run NodePressure ...
	I0127 20:18:52.886626   18021 start.go:226] waiting for startup goroutines ...
	I0127 20:18:52.926159   18021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53172 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/kubernetes-upgrade-851000/id_rsa Username:docker}
	I0127 20:18:53.048217   18021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 20:18:53.659398   18021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 20:18:53.718329   18021 addons.go:488] enableAddons completed in 1.171720495s
	I0127 20:18:53.719301   18021 ssh_runner.go:195] Run: rm -f paused
	I0127 20:18:53.769609   18021 start.go:538] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0127 20:18:53.793270   18021 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-851000" cluster and "default" namespace by default
	I0127 20:18:54.359263   18275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-259000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (7.620046456s)
	I0127 20:18:54.359284   18275 kic.go:199] duration metric: took 7.620275 seconds to extract preloaded images to volume
	I0127 20:18:54.359399   18275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 20:18:54.563870   18275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-259000 --name custom-flannel-259000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-259000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-259000 --network custom-flannel-259000 --ip 192.168.67.2 --volume custom-flannel-259000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 04:13:52 UTC, end at Sat 2023-01-28 04:18:55 UTC. --
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.401722474Z" level=info msg="Starting up"
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.403335245Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.403372710Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.403388970Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.403396025Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.404499995Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.404540360Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.404555953Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.404562291Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.413189588Z" level=info msg="Loading containers: start."
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.527108710Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.568669081Z" level=info msg="Loading containers: done."
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.583686548Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.583846985Z" level=info msg="Daemon has completed initialization"
	Jan 28 04:18:38 kubernetes-upgrade-851000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.614005752Z" level=info msg="API listen on [::]:2376"
	Jan 28 04:18:38 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:38.623173621Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 28 04:18:44 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:44.234018900Z" level=info msg="ignoring event" container=4495b8a8ec334a24a767139a87e32bca2fe7d4d2e8aad06225956a436493eb6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 04:18:44 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:44.236028842Z" level=info msg="ignoring event" container=f43c8a49506d393f43dc049296de31ce75b0281fdff073c441870af1f612cba0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 04:18:44 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:44.241673101Z" level=info msg="ignoring event" container=f9438feca628035fcee65a3b2cc4e66f76bca089e60f8a1cea63384b7d438b54 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 04:18:44 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:44.241854170Z" level=info msg="ignoring event" container=4906277f9723e91ede3b7c804750148a0596f5f167ab178e3545148453a8c319 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 04:18:44 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:44.244426116Z" level=info msg="ignoring event" container=1a8079dec4705ed28a5378f6738f5638f880c8ebf0260b4085632c8e3898ec1c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 04:18:44 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:44.247518003Z" level=info msg="ignoring event" container=04f1a9af433dc3d4e25dce762c80eb715e495aad651e385bcf67c27dd251d5fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 04:18:44 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:44.311224668Z" level=info msg="ignoring event" container=b99ee961a6b12e546222cb770157bc46432c4b677d857dee758646546d0f34de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 04:18:45 kubernetes-upgrade-851000 dockerd[11605]: time="2023-01-28T04:18:45.074639794Z" level=info msg="ignoring event" container=725213254f27620ef139d620c82d4322c6e9fe72c5d6ab0ebf984c6dd5962477 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	be76cc7e4fa6b       e9c08e11b07f6       8 seconds ago       Running             kube-controller-manager   2                   7e25fac280c05
	02a0fb31be0eb       deb04688c4a35       8 seconds ago       Running             kube-apiserver            2                   615cdfa23f6fc
	ef9f3885791b1       fce326961ae2d       8 seconds ago       Running             etcd                      2                   10ec41a48bea8
	6094805b6c481       655493523f607       8 seconds ago       Running             kube-scheduler            2                   178349e49d774
	b99ee961a6b12       655493523f607       16 seconds ago      Exited              kube-scheduler            1                   4906277f9723e
	04f1a9af433dc       fce326961ae2d       16 seconds ago      Exited              etcd                      1                   f9438feca6280
	f43c8a49506d3       e9c08e11b07f6       16 seconds ago      Exited              kube-controller-manager   1                   1a8079dec4705
	725213254f276       deb04688c4a35       16 seconds ago      Exited              kube-apiserver            1                   4495b8a8ec334
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-851000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-851000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a22b9432724c1a7c0bfc1f92a18db163006c245
	                    minikube.k8s.io/name=kubernetes-upgrade-851000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_27T20_18_30_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 04:18:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-851000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 04:18:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 04:18:50 +0000   Sat, 28 Jan 2023 04:18:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 04:18:50 +0000   Sat, 28 Jan 2023 04:18:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 04:18:50 +0000   Sat, 28 Jan 2023 04:18:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 04:18:50 +0000   Sat, 28 Jan 2023 04:18:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-851000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 11af74b3a18d4d7295d17813eccf6dd7
	  System UUID:                11af74b3a18d4d7295d17813eccf6dd7
	  Boot ID:                    f12572f6-ff4d-40a6-9357-635dd9f0fba2
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.22
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-851000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         25s
	  kube-system                 kube-apiserver-kubernetes-upgrade-851000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-851000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-kubernetes-upgrade-851000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 25s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25s              kubelet  Node kubernetes-upgrade-851000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s              kubelet  Node kubernetes-upgrade-851000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s              kubelet  Node kubernetes-upgrade-851000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24s              kubelet  Node kubernetes-upgrade-851000 status is now: NodeReady
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)  kubelet  Node kubernetes-upgrade-851000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)  kubelet  Node kubernetes-upgrade-851000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)  kubelet  Node kubernetes-upgrade-851000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000082] FS-Cache: O-key=[8] '8bf7dd0500000000'
	[  +0.000052] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000064] FS-Cache: N-cookie d=000000008b244b7a{9p.inode} n=00000000ca251887
	[  +0.000041] FS-Cache: N-key=[8] '8bf7dd0500000000'
	[  +0.003023] FS-Cache: Duplicate cookie detected
	[  +0.000102] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000051] FS-Cache: O-cookie d=000000008b244b7a{9p.inode} n=0000000014bd8725
	[  +0.000049] FS-Cache: O-key=[8] '8bf7dd0500000000'
	[  +0.000037] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000040] FS-Cache: N-cookie d=000000008b244b7a{9p.inode} n=000000003362e347
	[  +0.000110] FS-Cache: N-key=[8] '8bf7dd0500000000'
	[  +2.952968] FS-Cache: Duplicate cookie detected
	[  +0.000126] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000166] FS-Cache: O-cookie d=000000008b244b7a{9p.inode} n=00000000b7e6f460
	[  +0.000048] FS-Cache: O-key=[8] '8af7dd0500000000'
	[  +0.000095] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000082] FS-Cache: N-cookie d=000000008b244b7a{9p.inode} n=0000000064af9414
	[  +0.000093] FS-Cache: N-key=[8] '8af7dd0500000000'
	[  +0.422970] FS-Cache: Duplicate cookie detected
	[  +0.000057] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000072] FS-Cache: O-cookie d=000000008b244b7a{9p.inode} n=00000000ca9d0d28
	[  +0.000067] FS-Cache: O-key=[8] '94f7dd0500000000'
	[  +0.000047] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000073] FS-Cache: N-cookie d=000000008b244b7a{9p.inode} n=00000000b4b4ddba
	[  +0.000031] FS-Cache: N-key=[8] '94f7dd0500000000'
	
	* 
	* ==> etcd [04f1a9af433d] <==
	* {"level":"info","ts":"2023-01-28T04:18:39.553Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T04:18:39.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-01-28T04:18:39.553Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-01-28T04:18:39.553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T04:18:39.553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T04:18:41.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-28T04:18:41.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-28T04:18:41.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-01-28T04:18:41.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-01-28T04:18:41.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T04:18:41.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-01-28T04:18:41.037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T04:18:41.038Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-851000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T04:18:41.038Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:18:41.038Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:18:41.039Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T04:18:41.039Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T04:18:41.040Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-28T04:18:41.040Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T04:18:44.179Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-28T04:18:44.179Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-851000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-01-28T04:18:44.214Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-01-28T04:18:44.215Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T04:18:44.217Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T04:18:44.217Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-851000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [ef9f3885791b] <==
	* {"level":"info","ts":"2023-01-28T04:18:49.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-28T04:18:49.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T04:18:49.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-01-28T04:18:49.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-28T04:18:49.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-01-28T04:18:49.342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-28T04:18:49.343Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-851000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T04:18:49.343Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:18:49.343Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:18:49.343Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T04:18:49.343Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T04:18:49.344Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-28T04:18:49.344Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T04:18:50.904Z","caller":"traceutil/trace.go:171","msg":"trace[497488621] transaction","detail":"{read_only:false; number_of_response:1; response_revision:318; }","duration":"190.280384ms","start":"2023-01-28T04:18:50.713Z","end":"2023-01-28T04:18:50.904Z","steps":["trace[497488621] 'process raft request'  (duration: 190.150525ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-28T04:18:50.904Z","caller":"traceutil/trace.go:171","msg":"trace[580556889] linearizableReadLoop","detail":"{readStateIndex:336; appliedIndex:336; }","duration":"176.965071ms","start":"2023-01-28T04:18:50.727Z","end":"2023-01-28T04:18:50.904Z","steps":["trace[580556889] 'read index received'  (duration: 176.960636ms)","trace[580556889] 'applied index is now lower than readState.Index'  (duration: 3.83µs)"],"step_count":2}
	{"level":"warn","ts":"2023-01-28T04:18:50.904Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"177.126463ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-851000\" ","response":"range_response_count:1 size:7581"}
	{"level":"info","ts":"2023-01-28T04:18:50.904Z","caller":"traceutil/trace.go:171","msg":"trace[542729103] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-kubernetes-upgrade-851000; range_end:; response_count:1; response_revision:318; }","duration":"177.390093ms","start":"2023-01-28T04:18:50.727Z","end":"2023-01-28T04:18:50.904Z","steps":["trace[542729103] 'agreement among raft nodes before linearized reading'  (duration: 177.073221ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:18:50.906Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.548185ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/kubernetes-upgrade-851000\" ","response":"range_response_count:1 size:604"}
	{"level":"info","ts":"2023-01-28T04:18:50.906Z","caller":"traceutil/trace.go:171","msg":"trace[1444339706] range","detail":"{range_begin:/registry/leases/kube-node-lease/kubernetes-upgrade-851000; range_end:; response_count:1; response_revision:318; }","duration":"176.821503ms","start":"2023-01-28T04:18:50.730Z","end":"2023-01-28T04:18:50.906Z","steps":["trace[1444339706] 'agreement among raft nodes before linearized reading'  (duration: 176.503683ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:18:50.907Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"148.757399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/kube-apiserver-blejvn65566x5ufuc6y5fvgegq\" ","response":"range_response_count:1 size:687"}
	{"level":"info","ts":"2023-01-28T04:18:50.907Z","caller":"traceutil/trace.go:171","msg":"trace[632233887] range","detail":"{range_begin:/registry/leases/kube-system/kube-apiserver-blejvn65566x5ufuc6y5fvgegq; range_end:; response_count:1; response_revision:318; }","duration":"148.877821ms","start":"2023-01-28T04:18:50.758Z","end":"2023-01-28T04:18:50.907Z","steps":["trace[632233887] 'agreement among raft nodes before linearized reading'  (duration: 148.808952ms)"],"step_count":1}
	{"level":"warn","ts":"2023-01-28T04:18:50.907Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"176.880641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/kubernetes-upgrade-851000\" ","response":"range_response_count:1 size:715"}
	{"level":"info","ts":"2023-01-28T04:18:50.907Z","caller":"traceutil/trace.go:171","msg":"trace[1917305610] range","detail":"{range_begin:/registry/csinodes/kubernetes-upgrade-851000; range_end:; response_count:1; response_revision:318; }","duration":"176.974685ms","start":"2023-01-28T04:18:50.730Z","end":"2023-01-28T04:18:50.907Z","steps":["trace[1917305610] 'agreement among raft nodes before linearized reading'  (duration: 176.863763ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-28T04:18:52.738Z","caller":"traceutil/trace.go:171","msg":"trace[1808164529] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"136.84562ms","start":"2023-01-28T04:18:52.602Z","end":"2023-01-28T04:18:52.738Z","steps":["trace[1808164529] 'process raft request'  (duration: 136.600461ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-28T04:18:54.332Z","caller":"traceutil/trace.go:171","msg":"trace[1194639046] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"132.488728ms","start":"2023-01-28T04:18:54.200Z","end":"2023-01-28T04:18:54.332Z","steps":["trace[1194639046] 'process raft request'  (duration: 132.388014ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:18:56 up  1:18,  0 users,  load average: 5.08, 2.55, 1.82
	Linux kubernetes-upgrade-851000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [02a0fb31be0e] <==
	* I0128 04:18:50.709305       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0128 04:18:50.709309       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0128 04:18:50.709323       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0128 04:18:50.709332       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0128 04:18:50.709475       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0128 04:18:50.709806       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0128 04:18:50.709934       1 apf_controller.go:361] Starting API Priority and Fairness config controller
	I0128 04:18:50.815046       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0128 04:18:50.815160       1 shared_informer.go:280] Caches are synced for configmaps
	I0128 04:18:50.823311       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0128 04:18:50.823477       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0128 04:18:50.865975       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0128 04:18:50.908938       1 cache.go:39] Caches are synced for autoregister controller
	I0128 04:18:50.909562       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0128 04:18:50.910336       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0128 04:18:50.913319       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0128 04:18:50.913948       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0128 04:18:50.926443       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0128 04:18:51.432332       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0128 04:18:51.712388       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0128 04:18:52.446880       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0128 04:18:52.458368       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0128 04:18:52.492849       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0128 04:18:52.513602       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0128 04:18:52.522988       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [725213254f27] <==
	* W0128 04:18:44.185447       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0128 04:18:44.185511       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0128 04:18:44.185698       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I0128 04:18:44.218821       1 controller.go:211] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-controller-manager [be76cc7e4fa6] <==
	* I0128 04:18:52.825904       1 expand_controller.go:340] Starting expand controller
	I0128 04:18:52.826532       1 shared_informer.go:273] Waiting for caches to sync for expand
	I0128 04:18:52.832334       1 controllermanager.go:622] Started "serviceaccount"
	I0128 04:18:52.832461       1 serviceaccounts_controller.go:111] Starting service account controller
	I0128 04:18:52.832468       1 shared_informer.go:273] Waiting for caches to sync for service account
	I0128 04:18:52.837614       1 controllermanager.go:622] Started "daemonset"
	I0128 04:18:52.837919       1 daemon_controller.go:265] Starting daemon sets controller
	I0128 04:18:52.837968       1 shared_informer.go:273] Waiting for caches to sync for daemon sets
	I0128 04:18:52.842941       1 controllermanager.go:622] Started "statefulset"
	I0128 04:18:52.844591       1 stateful_set.go:152] Starting stateful set controller
	I0128 04:18:52.844605       1 shared_informer.go:273] Waiting for caches to sync for stateful set
	I0128 04:18:52.846559       1 controllermanager.go:622] Started "csrsigning"
	I0128 04:18:52.846680       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
	I0128 04:18:52.846688       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0128 04:18:52.846709       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
	I0128 04:18:52.846717       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0128 04:18:52.846744       1 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
	I0128 04:18:52.846790       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0128 04:18:52.846818       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0128 04:18:52.846828       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0128 04:18:52.846840       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0128 04:18:52.846903       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0128 04:18:52.846980       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0128 04:18:52.847017       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0128 04:18:52.870922       1 shared_informer.go:280] Caches are synced for tokens
	
	* 
	* ==> kube-controller-manager [f43c8a49506d] <==
	* I0128 04:18:40.448142       1 serving.go:348] Generated self-signed cert in-memory
	I0128 04:18:40.803335       1 controllermanager.go:182] Version: v1.26.1
	I0128 04:18:40.803391       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:18:40.804513       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0128 04:18:40.804636       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0128 04:18:40.804710       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 04:18:40.804923       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-scheduler [6094805b6c48] <==
	* I0128 04:18:48.551178       1 serving.go:348] Generated self-signed cert in-memory
	I0128 04:18:50.937524       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 04:18:50.937630       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:18:50.940557       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 04:18:50.940651       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0128 04:18:50.940662       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0128 04:18:50.940680       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 04:18:50.941208       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 04:18:50.943412       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:18:50.942166       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0128 04:18:50.955258       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0128 04:18:51.040853       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0128 04:18:51.043732       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:18:51.056423       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kube-scheduler [b99ee961a6b1] <==
	* I0128 04:18:40.214212       1 serving.go:348] Generated self-signed cert in-memory
	I0128 04:18:42.549352       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 04:18:42.549373       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 04:18:42.563042       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0128 04:18:42.563155       1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0128 04:18:42.563192       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 04:18:42.563200       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:18:42.563217       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0128 04:18:42.563222       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0128 04:18:42.575000       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 04:18:42.575063       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 04:18:42.663487       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0128 04:18:42.663487       1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
	I0128 04:18:42.663517       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 04:18:44.181133       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0128 04:18:44.181189       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0128 04:18:44.181433       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0128 04:18:44.181607       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 04:13:52 UTC, end at Sat 2023-01-28 04:18:58 UTC. --
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211766   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28a94e6395ae964089a1e68d0c13336a-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-851000\" (UID: \"28a94e6395ae964089a1e68d0c13336a\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211792   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a946d0f0987b220800d65c9930e036ff-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-851000\" (UID: \"a946d0f0987b220800d65c9930e036ff\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211816   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a946d0f0987b220800d65c9930e036ff-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-851000\" (UID: \"a946d0f0987b220800d65c9930e036ff\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211844   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28a94e6395ae964089a1e68d0c13336a-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-851000\" (UID: \"28a94e6395ae964089a1e68d0c13336a\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211861   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28a94e6395ae964089a1e68d0c13336a-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-851000\" (UID: \"28a94e6395ae964089a1e68d0c13336a\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211876   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/de123736b427cad97a051f4a2b127d87-etcd-data\") pod \"etcd-kubernetes-upgrade-851000\" (UID: \"de123736b427cad97a051f4a2b127d87\") " pod="kube-system/etcd-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211894   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a946d0f0987b220800d65c9930e036ff-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-851000\" (UID: \"a946d0f0987b220800d65c9930e036ff\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.211927   13022 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28a94e6395ae964089a1e68d0c13336a-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-851000\" (UID: \"28a94e6395ae964089a1e68d0c13336a\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.226345   13022 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: E0128 04:18:47.227273   13022 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.432187   13022 scope.go:115] "RemoveContainer" containerID="b99ee961a6b12e546222cb770157bc46432c4b677d857dee758646546d0f34de"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.442860   13022 scope.go:115] "RemoveContainer" containerID="04f1a9af433dc3d4e25dce762c80eb715e495aad651e385bcf67c27dd251d5fa"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.453593   13022 scope.go:115] "RemoveContainer" containerID="725213254f27620ef139d620c82d4322c6e9fe72c5d6ab0ebf984c6dd5962477"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.461693   13022 scope.go:115] "RemoveContainer" containerID="f43c8a49506d393f43dc049296de31ce75b0281fdff073c441870af1f612cba0"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: E0128 04:18:47.471095   13022 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-851000?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:47.641555   13022 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: E0128 04:18:47.641935   13022 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-851000"
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: W0128 04:18:47.838441   13022 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-851000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 04:18:47 kubernetes-upgrade-851000 kubelet[13022]: E0128 04:18:47.838526   13022 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-851000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 04:18:48 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:48.459356   13022 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-851000"
	Jan 28 04:18:50 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:50.855380   13022 apiserver.go:52] "Watching apiserver"
	Jan 28 04:18:50 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:50.921608   13022 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-851000"
	Jan 28 04:18:50 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:50.921837   13022 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-851000"
	Jan 28 04:18:50 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:50.969059   13022 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 28 04:18:51 kubernetes-upgrade-851000 kubelet[13022]: I0128 04:18:51.045255   13022 reconciler.go:41] "Reconciler: start to sync state"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-851000 -n kubernetes-upgrade-851000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-851000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-851000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-851000 describe pod storage-provisioner: exit status 1 (55.867361ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-851000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-851000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-851000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-851000: (2.660808203s)
--- FAIL: TestKubernetesUpgrade (564.57s)

                                                
                                    
TestMissingContainerUpgrade (53.57s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2973974326.exe start -p missing-upgrade-249000 --memory=2200 --driver=docker 
E0127 20:08:44.677311    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2973974326.exe start -p missing-upgrade-249000 --memory=2200 --driver=docker : exit status 78 (39.111396764s)

                                                
                                                
-- stdout --
	* [missing-upgrade-249000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-249000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-249000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 21.62 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 52.80 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 82.47 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 115.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 151.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 185.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 204.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 229.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 261.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 294.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 331.77 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 376.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 408.88 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 432.02 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 458.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 488.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 515.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:09:01.540143369 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-249000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:09:20.927144488 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2973974326.exe start -p missing-upgrade-249000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2973974326.exe start -p missing-upgrade-249000 --memory=2200 --driver=docker : exit status 70 (4.024032004s)

                                                
                                                
-- stdout --
	* [missing-upgrade-249000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-249000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-249000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2973974326.exe start -p missing-upgrade-249000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2973974326.exe start -p missing-upgrade-249000 --memory=2200 --driver=docker : exit status 70 (4.17425114s)

                                                
                                                
-- stdout --
	* [missing-upgrade-249000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-249000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-249000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-01-27 20:09:33.921905 -0800 PST m=+2362.912734029
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-249000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-249000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f5bc4ace2facc51092fef884498c004bb102016dbc2960b01421f7ecbf5de66e",
	        "Created": "2023-01-28T04:09:09.715121916Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 176641,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:09:09.946416008Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/f5bc4ace2facc51092fef884498c004bb102016dbc2960b01421f7ecbf5de66e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5bc4ace2facc51092fef884498c004bb102016dbc2960b01421f7ecbf5de66e/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5bc4ace2facc51092fef884498c004bb102016dbc2960b01421f7ecbf5de66e/hosts",
	        "LogPath": "/var/lib/docker/containers/f5bc4ace2facc51092fef884498c004bb102016dbc2960b01421f7ecbf5de66e/f5bc4ace2facc51092fef884498c004bb102016dbc2960b01421f7ecbf5de66e-json.log",
	        "Name": "/missing-upgrade-249000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-249000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8f70eebd815a41e0d9ebfbe4655e915941f2fd6c4026f5ee142bac11e11ebc00-init/diff:/var/lib/docker/overlay2/575040c1cd6fa9b064a258db2eb02fcbe8cdb3384cdb47f19b4234e5ba9b4a97/diff:/var/lib/docker/overlay2/1e8fb5f1d948945df3ddce71e758d3da8d8118858f2e5a08df9464ee9ebb2037/diff:/var/lib/docker/overlay2/c544960136dbc5d092fae5313d527643651c6e2a65704463efaa4358ccae1331/diff:/var/lib/docker/overlay2/43f564c70597689a3ea632103df9b1a253ffd27aa3437620374f1177a296a1eb/diff:/var/lib/docker/overlay2/d16fde0b6bdeaf4261faee7fc9e42341173eb434cc833954ddb2277f468c37f0/diff:/var/lib/docker/overlay2/8285e91d760eaef85d2fb3c28000c3f0709f50513ebe89cf374288f97135c044/diff:/var/lib/docker/overlay2/1e968842ba0ce46f4ff6359b3e5a21c70757c393eaad21d62f2266ca03ecf309/diff:/var/lib/docker/overlay2/dc8d9c03061beef86986bd597b0fd68f381f214529929dbef2fa75e7ae981eab/diff:/var/lib/docker/overlay2/75498eeada407023a5fd32c0335558b546de1882e522b699aad1f475cc23d360/diff:/var/lib/docker/overlay2/30fe2e
0418914eba58711b96964efe6c7b51f633464f31a15cc86cb6d66dc918/diff:/var/lib/docker/overlay2/41574202d4243f42c771c64dec875284f984561185dd87461ded79e989fe0012/diff:/var/lib/docker/overlay2/2486a32b89da283f9ae514f00dfa4f50bb6300e2f959c3637d982fdf023db0e4/diff:/var/lib/docker/overlay2/c573ab199116f10bd11a3f57b93275ba9b230f9c5f1ce297dbbf8a9644a2784d/diff:/var/lib/docker/overlay2/c3d71f26de8fc41a26f47958ab3b388a7367f8d0e96e143836e58029c9b3afae/diff:/var/lib/docker/overlay2/8462333bc4a29ccf2ca4426977034439f352217402c29f15fecad093927e849c/diff:/var/lib/docker/overlay2/922a17c47d339ea250e98f5fcf695096b4a16e48818603d8905123bd77cedb56/diff:/var/lib/docker/overlay2/dfacd1805d008155c4ad90ccfc042aa2ec49c7407b078f228b157fbcb3a0469c/diff:/var/lib/docker/overlay2/bc33364f21f93e8d8589c294e5b7e688a319087be4d62cdfa8f6c73ea9101544/diff:/var/lib/docker/overlay2/633cfb70aa09484c4007a73f11539673a8cbd06a79b085d7d6a728e6d393aa2b/diff:/var/lib/docker/overlay2/03134537a7343ec3b51c98c4ea881891568edb58af0f5710b2fa8786f4840bc2/diff:/var/lib/d
ocker/overlay2/76c005bad483f262fdc488a083cf470dfcbc09f18c10bd5d71b64207a9e8bb13/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8f70eebd815a41e0d9ebfbe4655e915941f2fd6c4026f5ee142bac11e11ebc00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8f70eebd815a41e0d9ebfbe4655e915941f2fd6c4026f5ee142bac11e11ebc00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8f70eebd815a41e0d9ebfbe4655e915941f2fd6c4026f5ee142bac11e11ebc00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-249000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-249000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-249000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-249000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-249000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "282d31d7191336bde5a2970563da5ae291d251f88814b54fa8a931505071123e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52870"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52871"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52872"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/282d31d71913",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "c5d2855ace8d878928b24fe377dbe429935075f3febe87df245c451ac045717e",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "db2808adae70f1543c0b2142988ae45ef7eeb96c9849cf9eae7df9ab6bb57a0e",
	                    "EndpointID": "c5d2855ace8d878928b24fe377dbe429935075f3febe87df245c451ac045717e",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-249000 -n missing-upgrade-249000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-249000 -n missing-upgrade-249000: exit status 6 (387.842513ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:09:34.398620   15210 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-249000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-249000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-249000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-249000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-249000: (2.365112337s)
--- FAIL: TestMissingContainerUpgrade (53.57s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (49.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3159878869.exe start -p stopped-upgrade-832000 --memory=2200 --vm-driver=docker 
E0127 20:11:28.390725    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3159878869.exe start -p stopped-upgrade-832000 --memory=2200 --vm-driver=docker : exit status 70 (37.992498116s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-832000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1360945122
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:11:18.365425339 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-832000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:11:37.857506733 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-832000", then "minikube start -p stopped-upgrade-832000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 37.92 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 78.91 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 138.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 196.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 256.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 315.01 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 370.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 419.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 474.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 528.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:11:37.857506733 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3159878869.exe start -p stopped-upgrade-832000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3159878869.exe start -p stopped-upgrade-832000 --memory=2200 --vm-driver=docker : exit status 70 (4.36981505s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-832000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig196562614
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-832000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3159878869.exe start -p stopped-upgrade-832000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.3159878869.exe start -p stopped-upgrade-832000 --memory=2200 --vm-driver=docker : exit status 70 (4.470957556s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-832000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1698319596
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-832000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (49.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (253.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-720000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0127 20:24:13.109021    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-720000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m13.343695135s)

                                                
                                                
-- stdout --
	* [old-k8s-version-720000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-720000 in cluster old-k8s-version-720000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 20:23:49.162742   21770 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:23:49.162906   21770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:23:49.162912   21770 out.go:309] Setting ErrFile to fd 2...
	I0127 20:23:49.162916   21770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:23:49.163049   21770 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 20:23:49.163595   21770 out.go:303] Setting JSON to false
	I0127 20:23:49.185480   21770 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5003,"bootTime":1674874826,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 20:23:49.185625   21770 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:23:49.208173   21770 out.go:177] * [old-k8s-version-720000] minikube v1.28.0 on Darwin 13.2
	I0127 20:23:49.250030   21770 notify.go:220] Checking for updates...
	I0127 20:23:49.271684   21770 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:23:49.345922   21770 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:23:49.420790   21770 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:23:49.494877   21770 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:23:49.536673   21770 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 20:23:49.578497   21770 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:23:49.600142   21770 config.go:180] Loaded profile config "kubenet-259000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:23:49.600210   21770 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:23:49.666290   21770 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 20:23:49.666477   21770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:23:49.822651   21770 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-28 04:23:49.722093601 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:23:49.852622   21770 out.go:177] * Using the docker driver based on user configuration
	I0127 20:23:49.862595   21770 start.go:296] selected driver: docker
	I0127 20:23:49.862611   21770 start.go:840] validating driver "docker" against <nil>
	I0127 20:23:49.862644   21770 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:23:49.865368   21770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:23:50.037602   21770 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:70 SystemTime:2023-01-28 04:23:49.922790337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:23:50.037711   21770 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 20:23:50.037866   21770 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:23:50.076503   21770 out.go:177] * Using Docker Desktop driver with root privileges
	I0127 20:23:50.098455   21770 cni.go:84] Creating CNI manager for ""
	I0127 20:23:50.098494   21770 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:23:50.098512   21770 start_flags.go:319] config:
	{Name:old-k8s-version-720000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:23:50.142312   21770 out.go:177] * Starting control plane node old-k8s-version-720000 in cluster old-k8s-version-720000
	I0127 20:23:50.163367   21770 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 20:23:50.185369   21770 out.go:177] * Pulling base image ...
	I0127 20:23:50.227674   21770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:23:50.227762   21770 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 20:23:50.227796   21770 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 20:23:50.227830   21770 cache.go:57] Caching tarball of preloaded images
	I0127 20:23:50.228021   21770 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:23:50.228039   21770 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0127 20:23:50.228898   21770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/config.json ...
	I0127 20:23:50.229034   21770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/config.json: {Name:mk2c595e87b88b1e259e12d7369d76203aa9366d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:23:50.289430   21770 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0127 20:23:50.289448   21770 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0127 20:23:50.289466   21770 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:23:50.289579   21770 start.go:364] acquiring machines lock for old-k8s-version-720000: {Name:mk4c4e23ea55570fd8854da14e914c261c97da33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:23:50.289731   21770 start.go:368] acquired machines lock for "old-k8s-version-720000" in 139.702µs
	I0127 20:23:50.289762   21770 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-720000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:23:50.289846   21770 start.go:125] createHost starting for "" (driver="docker")
	I0127 20:23:50.364488   21770 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0127 20:23:50.364897   21770 start.go:159] libmachine.API.Create for "old-k8s-version-720000" (driver="docker")
	I0127 20:23:50.364953   21770 client.go:168] LocalClient.Create starting
	I0127 20:23:50.365152   21770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem
	I0127 20:23:50.365292   21770 main.go:141] libmachine: Decoding PEM data...
	I0127 20:23:50.365331   21770 main.go:141] libmachine: Parsing certificate...
	I0127 20:23:50.365480   21770 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem
	I0127 20:23:50.365561   21770 main.go:141] libmachine: Decoding PEM data...
	I0127 20:23:50.365580   21770 main.go:141] libmachine: Parsing certificate...
	I0127 20:23:50.366430   21770 cli_runner.go:164] Run: docker network inspect old-k8s-version-720000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0127 20:23:50.427403   21770 cli_runner.go:211] docker network inspect old-k8s-version-720000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0127 20:23:50.427510   21770 network_create.go:281] running [docker network inspect old-k8s-version-720000] to gather additional debugging logs...
	I0127 20:23:50.427528   21770 cli_runner.go:164] Run: docker network inspect old-k8s-version-720000
	W0127 20:23:50.488474   21770 cli_runner.go:211] docker network inspect old-k8s-version-720000 returned with exit code 1
	I0127 20:23:50.488509   21770 network_create.go:284] error running [docker network inspect old-k8s-version-720000]: docker network inspect old-k8s-version-720000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-720000
	I0127 20:23:50.488524   21770 network_create.go:286] output of [docker network inspect old-k8s-version-720000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-720000
	
	** /stderr **
	I0127 20:23:50.488607   21770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0127 20:23:50.552814   21770 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:23:50.554295   21770 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:23:50.555850   21770 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0127 20:23:50.556143   21770 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012439b0}
	I0127 20:23:50.556155   21770 network_create.go:123] attempt to create docker network old-k8s-version-720000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0127 20:23:50.556226   21770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-720000 old-k8s-version-720000
	I0127 20:23:50.656461   21770 network_create.go:107] docker network old-k8s-version-720000 192.168.76.0/24 created
	I0127 20:23:50.656500   21770 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-720000" container
	I0127 20:23:50.656625   21770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0127 20:23:50.719599   21770 cli_runner.go:164] Run: docker volume create old-k8s-version-720000 --label name.minikube.sigs.k8s.io=old-k8s-version-720000 --label created_by.minikube.sigs.k8s.io=true
	I0127 20:23:50.788912   21770 oci.go:103] Successfully created a docker volume old-k8s-version-720000
	I0127 20:23:50.789073   21770 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-720000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-720000 --entrypoint /usr/bin/test -v old-k8s-version-720000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -d /var/lib
	I0127 20:23:51.424053   21770 oci.go:107] Successfully prepared a docker volume old-k8s-version-720000
	I0127 20:23:51.424089   21770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:23:51.424105   21770 kic.go:190] Starting extracting preloaded images to volume ...
	I0127 20:23:51.424217   21770 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-720000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir
	I0127 20:23:59.244033   21770 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-720000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a -I lz4 -xf /preloaded.tar -C /extractDir: (7.819787135s)
	I0127 20:23:59.244056   21770 kic.go:199] duration metric: took 7.819986 seconds to extract preloaded images to volume
	I0127 20:23:59.244169   21770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0127 20:23:59.400876   21770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-720000 --name old-k8s-version-720000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-720000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-720000 --network old-k8s-version-720000 --ip 192.168.76.2 --volume old-k8s-version-720000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a
	I0127 20:23:59.791368   21770 cli_runner.go:164] Run: docker container inspect old-k8s-version-720000 --format={{.State.Running}}
	I0127 20:23:59.865972   21770 cli_runner.go:164] Run: docker container inspect old-k8s-version-720000 --format={{.State.Status}}
	I0127 20:23:59.948076   21770 cli_runner.go:164] Run: docker exec old-k8s-version-720000 stat /var/lib/dpkg/alternatives/iptables
	I0127 20:24:00.081001   21770 oci.go:144] the created container "old-k8s-version-720000" has a running status.
	I0127 20:24:00.081053   21770 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa...
	I0127 20:24:00.228467   21770 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0127 20:24:00.351029   21770 cli_runner.go:164] Run: docker container inspect old-k8s-version-720000 --format={{.State.Status}}
	I0127 20:24:00.415544   21770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0127 20:24:00.415566   21770 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-720000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0127 20:24:00.527632   21770 cli_runner.go:164] Run: docker container inspect old-k8s-version-720000 --format={{.State.Status}}
	I0127 20:24:00.591442   21770 machine.go:88] provisioning docker machine ...
	I0127 20:24:00.591485   21770 ubuntu.go:169] provisioning hostname "old-k8s-version-720000"
	I0127 20:24:00.591569   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:00.653908   21770 main.go:141] libmachine: Using SSH client type: native
	I0127 20:24:00.654121   21770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55059 <nil> <nil>}
	I0127 20:24:00.654139   21770 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720000 && echo "old-k8s-version-720000" | sudo tee /etc/hostname
	I0127 20:24:00.796898   21770 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720000
	
	I0127 20:24:00.796998   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:00.858348   21770 main.go:141] libmachine: Using SSH client type: native
	I0127 20:24:00.858519   21770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55059 <nil> <nil>}
	I0127 20:24:00.858533   21770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:24:00.991192   21770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:24:00.991223   21770 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3092/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3092/.minikube}
	I0127 20:24:00.991243   21770 ubuntu.go:177] setting up certificates
	I0127 20:24:00.991256   21770 provision.go:83] configureAuth start
	I0127 20:24:00.991344   21770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-720000
	I0127 20:24:01.054245   21770 provision.go:138] copyHostCerts
	I0127 20:24:01.054352   21770 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem, removing ...
	I0127 20:24:01.054363   21770 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
	I0127 20:24:01.054461   21770 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem (1078 bytes)
	I0127 20:24:01.054678   21770 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem, removing ...
	I0127 20:24:01.054685   21770 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
	I0127 20:24:01.054746   21770 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem (1123 bytes)
	I0127 20:24:01.054904   21770 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem, removing ...
	I0127 20:24:01.054910   21770 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
	I0127 20:24:01.054968   21770 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem (1679 bytes)
	I0127 20:24:01.055085   21770 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-720000]
	I0127 20:24:01.248129   21770 provision.go:172] copyRemoteCerts
	I0127 20:24:01.248237   21770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:24:01.248307   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:01.311671   21770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:24:01.408228   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 20:24:01.428441   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 20:24:01.447052   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 20:24:01.464765   21770 provision.go:86] duration metric: configureAuth took 473.498315ms
	I0127 20:24:01.464778   21770 ubuntu.go:193] setting minikube options for container-runtime
	I0127 20:24:01.464926   21770 config.go:180] Loaded profile config "old-k8s-version-720000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:24:01.464996   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:01.525494   21770 main.go:141] libmachine: Using SSH client type: native
	I0127 20:24:01.525654   21770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55059 <nil> <nil>}
	I0127 20:24:01.525670   21770 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:24:01.658469   21770 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0127 20:24:01.658482   21770 ubuntu.go:71] root file system type: overlay
	I0127 20:24:01.658652   21770 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:24:01.658741   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:01.719450   21770 main.go:141] libmachine: Using SSH client type: native
	I0127 20:24:01.719603   21770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55059 <nil> <nil>}
	I0127 20:24:01.719657   21770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:24:01.863168   21770 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:24:01.863277   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:01.925154   21770 main.go:141] libmachine: Using SSH client type: native
	I0127 20:24:01.925304   21770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55059 <nil> <nil>}
	I0127 20:24:01.925318   21770 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:24:02.545827   21770 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-12-15 22:25:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 04:24:01.860496385 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0127 20:24:02.545856   21770 machine.go:91] provisioned docker machine in 1.954402139s
	I0127 20:24:02.545863   21770 client.go:171] LocalClient.Create took 12.180951756s
	I0127 20:24:02.545884   21770 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-720000" took 12.18104299s
	I0127 20:24:02.545893   21770 start.go:300] post-start starting for "old-k8s-version-720000" (driver="docker")
	I0127 20:24:02.545898   21770 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:24:02.545984   21770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:24:02.546048   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:02.609040   21770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:24:02.704416   21770 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:24:02.708845   21770 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 20:24:02.708867   21770 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 20:24:02.708874   21770 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 20:24:02.708881   21770 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0127 20:24:02.708890   21770 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/addons for local assets ...
	I0127 20:24:02.709018   21770 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/files for local assets ...
	I0127 20:24:02.709224   21770 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> 44062.pem in /etc/ssl/certs
	I0127 20:24:02.709433   21770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:24:02.718337   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:24:02.739031   21770 start.go:303] post-start completed in 193.128895ms
	I0127 20:24:02.739544   21770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-720000
	I0127 20:24:02.815536   21770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/config.json ...
	I0127 20:24:02.816038   21770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 20:24:02.816100   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:02.878932   21770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:24:02.971424   21770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 20:24:02.978151   21770 start.go:128] duration metric: createHost completed in 12.688350135s
	I0127 20:24:02.978177   21770 start.go:83] releasing machines lock for "old-k8s-version-720000", held for 12.688489506s
	I0127 20:24:02.978259   21770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-720000
	I0127 20:24:03.045649   21770 ssh_runner.go:195] Run: cat /version.json
	I0127 20:24:03.045663   21770 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0127 20:24:03.045718   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:03.045771   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:03.112021   21770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:24:03.112471   21770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55059 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:24:03.201008   21770 ssh_runner.go:195] Run: systemctl --version
	I0127 20:24:03.401290   21770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 20:24:03.406874   21770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 20:24:03.429302   21770 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 20:24:03.429388   21770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0127 20:24:03.445791   21770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0127 20:24:03.454478   21770 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0127 20:24:03.454496   21770 start.go:472] detecting cgroup driver to use...
	I0127 20:24:03.454512   21770 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:24:03.454634   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:24:03.469814   21770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0127 20:24:03.479984   21770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:24:03.489258   21770 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:24:03.489331   21770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:24:03.498497   21770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:24:03.508631   21770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:24:03.518304   21770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:24:03.530883   21770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:24:03.541700   21770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 20:24:03.553663   21770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:24:03.562722   21770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:24:03.573303   21770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:24:03.646668   21770 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:24:03.731310   21770 start.go:472] detecting cgroup driver to use...
	I0127 20:24:03.731332   21770 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:24:03.731406   21770 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:24:03.745036   21770 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0127 20:24:03.745118   21770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:24:03.758565   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:24:03.777738   21770 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:24:03.855644   21770 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:24:03.950654   21770 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:24:03.950676   21770 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 20:24:03.965655   21770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:24:04.052504   21770 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:24:04.290545   21770 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:24:04.324916   21770 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:24:04.403974   21770 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	I0127 20:24:04.404165   21770 cli_runner.go:164] Run: docker exec -t old-k8s-version-720000 dig +short host.docker.internal
	I0127 20:24:04.520793   21770 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0127 20:24:04.520908   21770 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0127 20:24:04.525340   21770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:24:04.536787   21770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:24:04.600202   21770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:24:04.600285   21770 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:24:04.626019   21770 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0127 20:24:04.626040   21770 docker.go:560] Images already preloaded, skipping extraction
	I0127 20:24:04.626135   21770 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:24:04.651487   21770 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0127 20:24:04.651501   21770 cache_images.go:84] Images are preloaded, skipping loading
	I0127 20:24:04.651595   21770 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 20:24:04.731078   21770 cni.go:84] Creating CNI manager for ""
	I0127 20:24:04.731100   21770 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:24:04.731122   21770 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 20:24:04.731147   21770 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720000 NodeName:old-k8s-version-720000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0127 20:24:04.731284   21770 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-720000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-720000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 20:24:04.731380   21770 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-720000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 20:24:04.731459   21770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0127 20:24:04.746900   21770 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 20:24:04.746979   21770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 20:24:04.756217   21770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0127 20:24:04.773459   21770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 20:24:04.788025   21770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0127 20:24:04.802993   21770 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 20:24:04.807474   21770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:24:04.819496   21770 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000 for IP: 192.168.76.2
	I0127 20:24:04.819517   21770 certs.go:186] acquiring lock for shared ca certs: {Name:mk2d86ad31f10478b3fe72eedd54ef2fcd74cf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:24:04.819692   21770 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key
	I0127 20:24:04.819755   21770 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key
	I0127 20:24:04.819794   21770 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/client.key
	I0127 20:24:04.819809   21770 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/client.crt with IP's: []
	I0127 20:24:04.980985   21770 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/client.crt ...
	I0127 20:24:04.981012   21770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/client.crt: {Name:mk68dd2ad24e6314f8fe8caab92fd82a3a339d82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:24:04.981391   21770 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/client.key ...
	I0127 20:24:04.981401   21770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/client.key: {Name:mk3483a7242b7f34e8f083eb64c01cae7b12a023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:24:04.981632   21770 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key.31bdca25
	I0127 20:24:04.981652   21770 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0127 20:24:05.037465   21770 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.crt.31bdca25 ...
	I0127 20:24:05.037480   21770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.crt.31bdca25: {Name:mk4041fb9ae5bf652149bf45c4ba3f781d852cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:24:05.037799   21770 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key.31bdca25 ...
	I0127 20:24:05.037812   21770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key.31bdca25: {Name:mkd6c57a72c8494f3d647ac089cde51d03c2f649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:24:05.038011   21770 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.crt
	I0127 20:24:05.038217   21770 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key
	I0127 20:24:05.038409   21770 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.key
	I0127 20:24:05.038427   21770 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.crt with IP's: []
	I0127 20:24:05.265297   21770 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.crt ...
	I0127 20:24:05.265310   21770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.crt: {Name:mk716a259d49e2cbddbce67adf764645d38cf40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:24:05.265567   21770 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.key ...
	I0127 20:24:05.265578   21770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.key: {Name:mk1fafbfffed08ad414d5387fc242ba867969c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:24:05.266019   21770 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem (1338 bytes)
	W0127 20:24:05.266073   21770 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406_empty.pem, impossibly tiny 0 bytes
	I0127 20:24:05.266103   21770 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 20:24:05.266145   21770 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem (1078 bytes)
	I0127 20:24:05.266184   21770 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem (1123 bytes)
	I0127 20:24:05.266220   21770 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem (1679 bytes)
	I0127 20:24:05.266305   21770 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:24:05.266883   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 20:24:05.287476   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 20:24:05.306590   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 20:24:05.325591   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 20:24:05.345282   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 20:24:05.365450   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 20:24:05.384525   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 20:24:05.403060   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 20:24:05.421900   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /usr/share/ca-certificates/44062.pem (1708 bytes)
	I0127 20:24:05.440960   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 20:24:05.460721   21770 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem --> /usr/share/ca-certificates/4406.pem (1338 bytes)
	I0127 20:24:05.482330   21770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 20:24:05.496976   21770 ssh_runner.go:195] Run: openssl version
	I0127 20:24:05.503461   21770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44062.pem && ln -fs /usr/share/ca-certificates/44062.pem /etc/ssl/certs/44062.pem"
	I0127 20:24:05.512426   21770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44062.pem
	I0127 20:24:05.516870   21770 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:36 /usr/share/ca-certificates/44062.pem
	I0127 20:24:05.516949   21770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44062.pem
	I0127 20:24:05.523278   21770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44062.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 20:24:05.532226   21770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 20:24:05.541213   21770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:24:05.546084   21770 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:31 /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:24:05.546148   21770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:24:05.552121   21770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 20:24:05.561182   21770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4406.pem && ln -fs /usr/share/ca-certificates/4406.pem /etc/ssl/certs/4406.pem"
	I0127 20:24:05.570285   21770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4406.pem
	I0127 20:24:05.574347   21770 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:36 /usr/share/ca-certificates/4406.pem
	I0127 20:24:05.574403   21770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4406.pem
	I0127 20:24:05.580264   21770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4406.pem /etc/ssl/certs/51391683.0"
	I0127 20:24:05.590229   21770 kubeadm.go:401] StartCluster: {Name:old-k8s-version-720000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:24:05.590341   21770 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:24:05.616255   21770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 20:24:05.625095   21770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 20:24:05.633510   21770 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 20:24:05.633567   21770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:24:05.642051   21770 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 20:24:05.642081   21770 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 20:24:05.695310   21770 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0127 20:24:05.695392   21770 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 20:24:06.021026   21770 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 20:24:06.021114   21770 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 20:24:06.021211   21770 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 20:24:06.269061   21770 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 20:24:06.270000   21770 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 20:24:06.277084   21770 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0127 20:24:06.346906   21770 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 20:24:06.368763   21770 out.go:204]   - Generating certificates and keys ...
	I0127 20:24:06.368870   21770 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 20:24:06.368942   21770 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 20:24:06.488020   21770 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 20:24:06.641792   21770 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0127 20:24:06.719005   21770 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0127 20:24:06.824249   21770 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0127 20:24:06.921035   21770 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0127 20:24:06.921167   21770 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-720000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 20:24:07.060199   21770 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0127 20:24:07.060368   21770 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-720000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0127 20:24:07.255671   21770 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 20:24:07.553469   21770 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 20:24:07.683588   21770 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0127 20:24:07.683678   21770 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 20:24:07.743211   21770 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 20:24:07.978743   21770 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 20:24:08.087483   21770 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 20:24:08.147009   21770 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 20:24:08.147696   21770 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 20:24:08.169163   21770 out.go:204]   - Booting up control plane ...
	I0127 20:24:08.169262   21770 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 20:24:08.169335   21770 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 20:24:08.169432   21770 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 20:24:08.169503   21770 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 20:24:08.169625   21770 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 20:24:48.157580   21770 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 20:24:48.158185   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:24:48.158406   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:24:53.159505   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:24:53.159753   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:25:03.159791   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:25:03.159952   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:25:23.160724   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:25:23.160986   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:26:03.161426   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:26:03.161589   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:26:03.161598   21770 kubeadm.go:322] 
	I0127 20:26:03.161670   21770 kubeadm.go:322] Unfortunately, an error has occurred:
	I0127 20:26:03.161713   21770 kubeadm.go:322] 	timed out waiting for the condition
	I0127 20:26:03.161718   21770 kubeadm.go:322] 
	I0127 20:26:03.161742   21770 kubeadm.go:322] This error is likely caused by:
	I0127 20:26:03.161769   21770 kubeadm.go:322] 	- The kubelet is not running
	I0127 20:26:03.161911   21770 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 20:26:03.161923   21770 kubeadm.go:322] 
	I0127 20:26:03.162030   21770 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 20:26:03.162062   21770 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0127 20:26:03.162095   21770 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0127 20:26:03.162104   21770 kubeadm.go:322] 
	I0127 20:26:03.162191   21770 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 20:26:03.162271   21770 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0127 20:26:03.162345   21770 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0127 20:26:03.162392   21770 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0127 20:26:03.162458   21770 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0127 20:26:03.162510   21770 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0127 20:26:03.164923   21770 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 20:26:03.165010   21770 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 20:26:03.165122   21770 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0127 20:26:03.165226   21770 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 20:26:03.165348   21770 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 20:26:03.165441   21770 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0127 20:26:03.165604   21770 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-720000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-720000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-720000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-720000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 20:26:03.165633   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0127 20:26:03.581337   21770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:26:03.591362   21770 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 20:26:03.591423   21770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:26:03.599167   21770 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 20:26:03.599188   21770 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 20:26:03.649483   21770 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0127 20:26:03.649555   21770 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 20:26:03.956323   21770 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 20:26:03.956480   21770 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 20:26:03.956584   21770 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 20:26:04.190671   21770 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 20:26:04.191400   21770 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 20:26:04.198051   21770 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0127 20:26:04.261788   21770 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 20:26:04.283118   21770 out.go:204]   - Generating certificates and keys ...
	I0127 20:26:04.283210   21770 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 20:26:04.283278   21770 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 20:26:04.283359   21770 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 20:26:04.283421   21770 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0127 20:26:04.283486   21770 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 20:26:04.283536   21770 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0127 20:26:04.283621   21770 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0127 20:26:04.283680   21770 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0127 20:26:04.283776   21770 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 20:26:04.283851   21770 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 20:26:04.283885   21770 kubeadm.go:322] [certs] Using the existing "sa" key
	I0127 20:26:04.283935   21770 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 20:26:04.384532   21770 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 20:26:04.551888   21770 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 20:26:04.634941   21770 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 20:26:04.795453   21770 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 20:26:04.797220   21770 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 20:26:04.818771   21770 out.go:204]   - Booting up control plane ...
	I0127 20:26:04.818860   21770 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 20:26:04.818931   21770 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 20:26:04.818989   21770 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 20:26:04.819069   21770 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 20:26:04.819205   21770 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 20:26:44.841284   21770 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 20:26:44.842276   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:26:44.842520   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:26:49.846802   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:26:49.847032   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:26:59.851923   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:26:59.852212   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:27:19.856750   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:27:19.857053   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:27:59.859696   21770 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:27:59.859949   21770 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:27:59.859972   21770 kubeadm.go:322] 
	I0127 20:27:59.860017   21770 kubeadm.go:322] Unfortunately, an error has occurred:
	I0127 20:27:59.860078   21770 kubeadm.go:322] 	timed out waiting for the condition
	I0127 20:27:59.860099   21770 kubeadm.go:322] 
	I0127 20:27:59.860171   21770 kubeadm.go:322] This error is likely caused by:
	I0127 20:27:59.860212   21770 kubeadm.go:322] 	- The kubelet is not running
	I0127 20:27:59.860310   21770 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 20:27:59.860316   21770 kubeadm.go:322] 
	I0127 20:27:59.860437   21770 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 20:27:59.860480   21770 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0127 20:27:59.860517   21770 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0127 20:27:59.860529   21770 kubeadm.go:322] 
	I0127 20:27:59.860663   21770 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 20:27:59.860739   21770 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0127 20:27:59.860813   21770 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0127 20:27:59.860859   21770 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0127 20:27:59.860931   21770 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0127 20:27:59.860960   21770 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0127 20:27:59.863546   21770 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 20:27:59.863613   21770 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 20:27:59.863723   21770 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0127 20:27:59.863802   21770 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 20:27:59.863871   21770 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 20:27:59.863936   21770 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0127 20:27:59.863959   21770 kubeadm.go:403] StartCluster complete in 3m54.227922875s
	I0127 20:27:59.864050   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:27:59.887349   21770 logs.go:279] 0 containers: []
	W0127 20:27:59.887363   21770 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:27:59.887434   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:27:59.910104   21770 logs.go:279] 0 containers: []
	W0127 20:27:59.910117   21770 logs.go:281] No container was found matching "etcd"
	I0127 20:27:59.910189   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:27:59.933486   21770 logs.go:279] 0 containers: []
	W0127 20:27:59.933502   21770 logs.go:281] No container was found matching "coredns"
	I0127 20:27:59.933572   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:27:59.956986   21770 logs.go:279] 0 containers: []
	W0127 20:27:59.957002   21770 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:27:59.957071   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:27:59.979821   21770 logs.go:279] 0 containers: []
	W0127 20:27:59.979838   21770 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:27:59.979918   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:28:00.002283   21770 logs.go:279] 0 containers: []
	W0127 20:28:00.002295   21770 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:28:00.002368   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:28:00.026785   21770 logs.go:279] 0 containers: []
	W0127 20:28:00.026799   21770 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:28:00.026859   21770 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:28:00.051436   21770 logs.go:279] 0 containers: []
	W0127 20:28:00.051450   21770 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:28:00.051465   21770 logs.go:124] Gathering logs for Docker ...
	I0127 20:28:00.051472   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:28:00.068913   21770 logs.go:124] Gathering logs for container status ...
	I0127 20:28:00.068928   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:28:02.120845   21770 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051888356s)
	I0127 20:28:02.120955   21770 logs.go:124] Gathering logs for kubelet ...
	I0127 20:28:02.120963   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:28:02.160114   21770 logs.go:124] Gathering logs for dmesg ...
	I0127 20:28:02.160131   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:28:02.173174   21770 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:28:02.173188   21770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:28:02.228630   21770 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0127 20:28:02.228664   21770 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 20:28:02.228704   21770 out.go:239] * 
	* 
	W0127 20:28:02.228865   21770 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 20:28:02.228901   21770 out.go:239] * 
	W0127 20:28:02.229550   21770 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:28:02.294214   21770 out.go:177] 
	W0127 20:28:02.368110   21770 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 20:28:02.368205   21770 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 20:28:02.368242   21770 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 20:28:02.426207   21770 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-720000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
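The kubeadm output above never gets a healthy kubelet, and minikube's own Suggestion line points at the cgroup-driver mismatch flagged by the IsDockerSystemdCheck warning. A minimal sketch of that remedy outside the test harness (the daemon.json edit is illustrative and not taken from this run) would be:

	# retry the profile with the kubelet cgroup driver pinned to systemd, per the Suggestion line
	out/minikube-darwin-amd64 start -p old-k8s-version-720000 --driver=docker --kubernetes-version=v1.16.0 \
	    --extra-config=kubelet.cgroup-driver=systemd

	# or, from a shell inside the node (e.g. 'out/minikube-darwin-amd64 ssh -p old-k8s-version-720000'),
	# align Docker with the systemd driver (assumed daemon.json content) and restart it
	echo '{"exec-opts":["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker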
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-720000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-720000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c",
	        "Created": "2023-01-28T04:23:59.460794108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:23:59.779920163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hosts",
	        "LogPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c-json.log",
	        "Name": "/old-k8s-version-720000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-720000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-720000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-720000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-720000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-720000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bde41110b4751090c526e013b9c06a4cd379268e62e360c4f7771f7880047bf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55061"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55063"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bde41110b47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-720000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a7d076a4985",
	                        "old-k8s-version-720000"
	                    ],
	                    "NetworkID": "4a101da36ff964d86adf1945f3a9a22581d700864206dd1558c9c4957ae7df32",
	                    "EndpointID": "478be67097d6d7fc644b5283841991783cd14bb90384bfcac470136266a2f598",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
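The full docker inspect dump above is what the post-mortem helper captures; when only the container state and published ports matter, the same data can be pulled with inspect's Go-template formatting (these commands are a sketch, not part of the helper):

	# container state only, instead of the full JSON
	docker inspect -f '{{.State.Status}}' old-k8s-version-720000
	# published port map as compact JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-720000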
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 6 (421.274368ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:28:03.004321   22963 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-720000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (253.87s)
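The status check above warns that kubectl is pointing at a stale context and that the profile is missing from the kubeconfig. The warning's own fix, plus the log collection requested in the boxed message, can be sketched as (use of the -p profile flag is assumed, not taken from this run):

	# rewrite the kubeconfig entry for this profile, as the WARNING suggests
	out/minikube-darwin-amd64 update-context -p old-k8s-version-720000
	# see which contexts the kubeconfig actually contains
	kubectl config get-contexts
	# capture full logs for a bug report, as the boxed message asks
	out/minikube-darwin-amd64 logs -p old-k8s-version-720000 --file=logs.txt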

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-720000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-720000 create -f testdata/busybox.yaml: exit status 1 (36.059345ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-720000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-720000 create -f testdata/busybox.yaml failed: exit status 1
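The create fails simply because the FirstStart failure above never wrote an old-k8s-version-720000 context into the kubeconfig at /Users/jenkins/minikube-integration/15565-3092/kubeconfig. A sketch of a guard for re-running the deploy step by hand (not part of the test) might look like:

	# only attempt the deploy if the context actually exists in the kubeconfig
	export KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	kubectl config get-contexts -o name | grep -qx old-k8s-version-720000 \
	    && kubectl --context old-k8s-version-720000 create -f testdata/busybox.yaml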
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-720000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-720000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c",
	        "Created": "2023-01-28T04:23:59.460794108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:23:59.779920163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hosts",
	        "LogPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c-json.log",
	        "Name": "/old-k8s-version-720000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-720000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-720000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-720000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-720000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-720000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bde41110b4751090c526e013b9c06a4cd379268e62e360c4f7771f7880047bf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55061"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55063"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bde41110b47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-720000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a7d076a4985",
	                        "old-k8s-version-720000"
	                    ],
	                    "NetworkID": "4a101da36ff964d86adf1945f3a9a22581d700864206dd1558c9c4957ae7df32",
	                    "EndpointID": "478be67097d6d7fc644b5283841991783cd14bb90384bfcac470136266a2f598",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 6 (427.594801ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:28:03.530599   22976 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-720000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
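Since the container itself is reported as Running, the kubelet endpoint that kubeadm kept probing (port 10248) can also be checked by hand inside the node; this is only a sketch and assumes curl is available in the kicbase image:

	# the same healthz probe kubeadm loops on, run inside the node container
	docker exec old-k8s-version-720000 curl -sSL http://localhost:10248/healthz
	# kubelet unit status inside the node, per the troubleshooting hints in the kubeadm output
	docker exec old-k8s-version-720000 systemctl status kubelet --no-pager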
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-720000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-720000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c",
	        "Created": "2023-01-28T04:23:59.460794108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:23:59.779920163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hosts",
	        "LogPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c-json.log",
	        "Name": "/old-k8s-version-720000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-720000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-720000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-720000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-720000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-720000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bde41110b4751090c526e013b9c06a4cd379268e62e360c4f7771f7880047bf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55061"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55063"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bde41110b47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-720000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a7d076a4985",
	                        "old-k8s-version-720000"
	                    ],
	                    "NetworkID": "4a101da36ff964d86adf1945f3a9a22581d700864206dd1558c9c4957ae7df32",
	                    "EndpointID": "478be67097d6d7fc644b5283841991783cd14bb90384bfcac470136266a2f598",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 6 (473.077508ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:28:04.065560   22988 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-720000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.06s)

                                                
                                    
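Note on the DeployApp failure above: the post-mortem shows the container itself running, while `minikube status` exits 6 because the profile "old-k8s-version-720000" is missing from the kubeconfig ("does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig"), which matches the stale-context warning in stdout. A minimal manual sketch of checking and repairing that context, assuming only the standard minikube/kubectl CLIs and the profile name taken from this log (not a step the test itself performs):

	# regenerate the kubeconfig entry for the profile, as the warning suggests
	out/minikube-darwin-amd64 update-context -p old-k8s-version-720000
	# confirm the context now exists in the active kubeconfig
	kubectl config get-contexts old-k8s-version-720000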
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-720000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0127 20:28:08.435873    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 20:28:09.155931    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.161036    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.171148    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.193288    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.208855    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:09.214018    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:09.225195    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:09.233939    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.246135    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:09.286446    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:09.314113    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.367012    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:09.476259    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.527712    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:09.796433    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:09.848201    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:10.438665    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:10.489086    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:11.720876    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:11.769345    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:14.283109    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:14.330656    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:18.916855    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:28:19.404585    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:19.451578    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:25.375756    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 20:28:29.645486    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:29.692172    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:28:44.728154    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 20:28:50.126306    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:28:50.172487    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:29:12.784546    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:29:14.055423    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:29:31.087309    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:29:31.133154    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-720000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.217282219s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-720000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-720000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-720000 describe deploy/metrics-server -n kube-system: exit status 1 (37.335684ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-720000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-720000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-720000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-720000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c",
	        "Created": "2023-01-28T04:23:59.460794108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:23:59.779920163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hosts",
	        "LogPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c-json.log",
	        "Name": "/old-k8s-version-720000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-720000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-720000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-720000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-720000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-720000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bde41110b4751090c526e013b9c06a4cd379268e62e360c4f7771f7880047bf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55059"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55060"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55061"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55063"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4bde41110b47",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-720000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a7d076a4985",
	                        "old-k8s-version-720000"
	                    ],
	                    "NetworkID": "4a101da36ff964d86adf1945f3a9a22581d700864206dd1558c9c4957ae7df32",
	                    "EndpointID": "478be67097d6d7fc644b5283841991783cd14bb90384bfcac470136266a2f598",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 6 (418.827961ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:29:33.800144   23094 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-720000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-720000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.74s)

                                                
                                    
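Note on the EnableAddonWhileActive failure above: the MK_ADDON_ENABLE error shows the addon callback running kubectl apply inside the node against https://localhost:8443, which is refused because the apiserver is not answering. A rough manual check before retrying the addon, assuming the kubectl binary and kubeconfig paths shown in the stderr above and the standard `minikube ssh` pass-through (illustrative only, not part of the test):

	# probe the in-node apiserver health endpoint
	out/minikube-darwin-amd64 -p old-k8s-version-720000 ssh -- \
	  sudo /var/lib/minikube/binaries/v1.16.0/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
	# once it reports "ok", retry the addon enable
	out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-720000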
TestStartStop/group/old-k8s-version/serial/SecondStart (498.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-720000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0127 20:29:38.896508    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:38.901737    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:38.913382    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:38.933542    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:38.973724    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:39.054657    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:39.221408    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:39.542444    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:39.589521    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:29:40.183116    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:41.464501    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:44.024996    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:49.180693    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:29:50.110369    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:29:50.397375    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:29:59.422884    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:30:17.794681    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:30:18.090008    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:30:19.903109    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:30:53.008463    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:30:53.054170    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:30:56.625683    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:31:00.864130    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:31:28.872646    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:31:30.203910    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:31:56.626179    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:31:57.897680    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:32:22.784599    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:32:51.234539    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:33:09.158184    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:33:09.211127    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:33:25.378591    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 20:33:27.786322    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 20:33:36.849489    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:33:36.895579    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:33:44.728959    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
E0127 20:34:38.898071    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:34:39.590394    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:34:50.111085    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:34:50.398387    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:35:06.627378    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:35:56.626949    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-720000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m13.268960429s)

                                                
                                                
-- stdout --
	* [old-k8s-version-720000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-720000 in cluster old-k8s-version-720000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-720000" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 20:29:35.926056   23124 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:29:35.926205   23124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:29:35.926210   23124 out.go:309] Setting ErrFile to fd 2...
	I0127 20:29:35.926214   23124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:29:35.926344   23124 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 20:29:35.926835   23124 out.go:303] Setting JSON to false
	I0127 20:29:35.946957   23124 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5349,"bootTime":1674874826,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 20:29:35.947062   23124 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:29:35.969258   23124 out.go:177] * [old-k8s-version-720000] minikube v1.28.0 on Darwin 13.2
	I0127 20:29:36.011251   23124 notify.go:220] Checking for updates...
	I0127 20:29:36.032895   23124 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:29:36.074921   23124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:29:36.095985   23124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:29:36.117160   23124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:29:36.138113   23124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 20:29:36.158951   23124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:29:36.180742   23124 config.go:180] Loaded profile config "old-k8s-version-720000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:29:36.202960   23124 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0127 20:29:36.224031   23124 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:29:36.286792   23124 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 20:29:36.286953   23124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:29:36.432331   23124 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:29:36.339223074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:29:36.454285   23124 out.go:177] * Using the docker driver based on existing profile
	I0127 20:29:36.476220   23124 start.go:296] selected driver: docker
	I0127 20:29:36.476244   23124 start.go:840] validating driver "docker" against &{Name:old-k8s-version-720000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:29:36.476361   23124 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:29:36.480472   23124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:29:36.627579   23124 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:29:36.531762522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:29:36.627759   23124 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:29:36.627780   23124 cni.go:84] Creating CNI manager for ""
	I0127 20:29:36.627793   23124 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:29:36.627804   23124 start_flags.go:319] config:
	{Name:old-k8s-version-720000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:29:36.649578   23124 out.go:177] * Starting control plane node old-k8s-version-720000 in cluster old-k8s-version-720000
	I0127 20:29:36.672549   23124 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 20:29:36.694505   23124 out.go:177] * Pulling base image ...
	I0127 20:29:36.737277   23124 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 20:29:36.737273   23124 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:29:36.737396   23124 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 20:29:36.737423   23124 cache.go:57] Caching tarball of preloaded images
	I0127 20:29:36.738117   23124 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:29:36.738290   23124 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0127 20:29:36.738748   23124 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/config.json ...
	I0127 20:29:36.794864   23124 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0127 20:29:36.794883   23124 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0127 20:29:36.794906   23124 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:29:36.794956   23124 start.go:364] acquiring machines lock for old-k8s-version-720000: {Name:mk4c4e23ea55570fd8854da14e914c261c97da33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:29:36.795047   23124 start.go:368] acquired machines lock for "old-k8s-version-720000" in 71.133µs
	I0127 20:29:36.795070   23124 start.go:96] Skipping create...Using existing machine configuration
	I0127 20:29:36.795079   23124 fix.go:55] fixHost starting: 
	I0127 20:29:36.795345   23124 cli_runner.go:164] Run: docker container inspect old-k8s-version-720000 --format={{.State.Status}}
	I0127 20:29:36.853042   23124 fix.go:103] recreateIfNeeded on old-k8s-version-720000: state=Stopped err=<nil>
	W0127 20:29:36.853075   23124 fix.go:129] unexpected machine state, will restart: <nil>
	I0127 20:29:36.895334   23124 out.go:177] * Restarting existing docker container for "old-k8s-version-720000" ...
	I0127 20:29:36.932943   23124 cli_runner.go:164] Run: docker start old-k8s-version-720000
	I0127 20:29:37.276027   23124 cli_runner.go:164] Run: docker container inspect old-k8s-version-720000 --format={{.State.Status}}
	I0127 20:29:37.339796   23124 kic.go:426] container "old-k8s-version-720000" state is running.
	I0127 20:29:37.340397   23124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-720000
	I0127 20:29:37.406637   23124 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/config.json ...
	I0127 20:29:37.407044   23124 machine.go:88] provisioning docker machine ...
	I0127 20:29:37.407071   23124 ubuntu.go:169] provisioning hostname "old-k8s-version-720000"
	I0127 20:29:37.407174   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:37.479322   23124 main.go:141] libmachine: Using SSH client type: native
	I0127 20:29:37.479531   23124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55384 <nil> <nil>}
	I0127 20:29:37.479545   23124 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-720000 && echo "old-k8s-version-720000" | sudo tee /etc/hostname
	I0127 20:29:37.629399   23124 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-720000
	
	I0127 20:29:37.629502   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:37.692383   23124 main.go:141] libmachine: Using SSH client type: native
	I0127 20:29:37.692541   23124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55384 <nil> <nil>}
	I0127 20:29:37.692554   23124 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-720000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-720000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-720000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:29:37.829319   23124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:29:37.829345   23124 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3092/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3092/.minikube}
	I0127 20:29:37.829362   23124 ubuntu.go:177] setting up certificates
	I0127 20:29:37.829374   23124 provision.go:83] configureAuth start
	I0127 20:29:37.829459   23124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-720000
	I0127 20:29:37.889142   23124 provision.go:138] copyHostCerts
	I0127 20:29:37.889254   23124 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem, removing ...
	I0127 20:29:37.889264   23124 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
	I0127 20:29:37.889380   23124 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem (1078 bytes)
	I0127 20:29:37.889593   23124 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem, removing ...
	I0127 20:29:37.889601   23124 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
	I0127 20:29:37.889665   23124 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem (1123 bytes)
	I0127 20:29:37.889822   23124 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem, removing ...
	I0127 20:29:37.889830   23124 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
	I0127 20:29:37.889931   23124 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem (1679 bytes)
	I0127 20:29:37.890079   23124 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-720000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-720000]
	I0127 20:29:38.175979   23124 provision.go:172] copyRemoteCerts
	I0127 20:29:38.176048   23124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:29:38.176099   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:38.238789   23124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55384 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:29:38.334333   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 20:29:38.351978   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 20:29:38.369898   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 20:29:38.387867   23124 provision.go:86] duration metric: configureAuth took 558.478511ms
	I0127 20:29:38.387881   23124 ubuntu.go:193] setting minikube options for container-runtime
	I0127 20:29:38.388061   23124 config.go:180] Loaded profile config "old-k8s-version-720000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:29:38.388126   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:38.446078   23124 main.go:141] libmachine: Using SSH client type: native
	I0127 20:29:38.446241   23124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55384 <nil> <nil>}
	I0127 20:29:38.446250   23124 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:29:38.579745   23124 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0127 20:29:38.579761   23124 ubuntu.go:71] root file system type: overlay
	I0127 20:29:38.579914   23124 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:29:38.580006   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:38.639588   23124 main.go:141] libmachine: Using SSH client type: native
	I0127 20:29:38.639755   23124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55384 <nil> <nil>}
	I0127 20:29:38.639815   23124 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:29:38.785081   23124 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:29:38.785188   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:38.844566   23124 main.go:141] libmachine: Using SSH client type: native
	I0127 20:29:38.844716   23124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 55384 <nil> <nil>}
	I0127 20:29:38.844733   23124 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:29:38.984111   23124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:29:38.984125   23124 machine.go:91] provisioned docker machine in 1.577068151s
	I0127 20:29:38.984138   23124 start.go:300] post-start starting for "old-k8s-version-720000" (driver="docker")
	I0127 20:29:38.984145   23124 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:29:38.984212   23124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:29:38.984263   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:39.070437   23124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55384 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:29:39.163905   23124 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:29:39.167677   23124 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 20:29:39.167693   23124 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 20:29:39.167699   23124 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 20:29:39.167704   23124 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0127 20:29:39.167712   23124 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/addons for local assets ...
	I0127 20:29:39.167818   23124 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/files for local assets ...
	I0127 20:29:39.167999   23124 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> 44062.pem in /etc/ssl/certs
	I0127 20:29:39.168195   23124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:29:39.175777   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:29:39.193439   23124 start.go:303] post-start completed in 209.289492ms
	I0127 20:29:39.193528   23124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 20:29:39.193587   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:39.254791   23124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55384 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:29:39.346986   23124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 20:29:39.351581   23124 fix.go:57] fixHost completed within 2.55649346s
	I0127 20:29:39.351595   23124 start.go:83] releasing machines lock for "old-k8s-version-720000", held for 2.556532934s
	I0127 20:29:39.351691   23124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-720000
	I0127 20:29:39.412092   23124 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0127 20:29:39.412092   23124 ssh_runner.go:195] Run: cat /version.json
	I0127 20:29:39.412186   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:39.412192   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:39.477204   23124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55384 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:29:39.477361   23124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55384 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/old-k8s-version-720000/id_rsa Username:docker}
	I0127 20:29:39.774449   23124 ssh_runner.go:195] Run: systemctl --version
	I0127 20:29:39.779138   23124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 20:29:39.784167   23124 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 20:29:39.784251   23124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0127 20:29:39.792384   23124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0127 20:29:39.800156   23124 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0127 20:29:39.800172   23124 start.go:472] detecting cgroup driver to use...
	I0127 20:29:39.800189   23124 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:29:39.800277   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:29:39.813878   23124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0127 20:29:39.822669   23124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:29:39.831962   23124 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:29:39.832023   23124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:29:39.840794   23124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:29:39.849315   23124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:29:39.857906   23124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:29:39.866460   23124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:29:39.874571   23124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 20:29:39.883255   23124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:29:39.891055   23124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:29:39.898447   23124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:29:39.974015   23124 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:29:40.046211   23124 start.go:472] detecting cgroup driver to use...
	I0127 20:29:40.046234   23124 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:29:40.046311   23124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:29:40.058882   23124 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0127 20:29:40.058964   23124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:29:40.070862   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:29:40.085860   23124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:29:40.185484   23124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:29:40.277259   23124 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:29:40.277291   23124 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 20:29:40.292086   23124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:29:40.381215   23124 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:29:40.591858   23124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:29:40.623768   23124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:29:40.698358   23124 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.22 ...
	I0127 20:29:40.698510   23124 cli_runner.go:164] Run: docker exec -t old-k8s-version-720000 dig +short host.docker.internal
	I0127 20:29:40.817091   23124 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0127 20:29:40.817194   23124 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0127 20:29:40.821768   23124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:29:40.832096   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:40.893362   23124 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:29:40.893445   23124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:29:40.919078   23124 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0127 20:29:40.919096   23124 docker.go:560] Images already preloaded, skipping extraction
	I0127 20:29:40.919190   23124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:29:40.944199   23124 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0127 20:29:40.948129   23124 cache_images.go:84] Images are preloaded, skipping loading
	I0127 20:29:40.948216   23124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 20:29:41.023476   23124 cni.go:84] Creating CNI manager for ""
	I0127 20:29:41.023494   23124 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:29:41.023549   23124 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 20:29:41.023564   23124 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-720000 NodeName:old-k8s-version-720000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0127 20:29:41.023711   23124 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-720000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-720000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 20:29:41.023844   23124 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-720000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 20:29:41.023926   23124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0127 20:29:41.032486   23124 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 20:29:41.032558   23124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 20:29:41.040478   23124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0127 20:29:41.053617   23124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 20:29:41.066890   23124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0127 20:29:41.080421   23124 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0127 20:29:41.084418   23124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:29:41.094997   23124 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000 for IP: 192.168.76.2
	I0127 20:29:41.095036   23124 certs.go:186] acquiring lock for shared ca certs: {Name:mk2d86ad31f10478b3fe72eedd54ef2fcd74cf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:29:41.095229   23124 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key
	I0127 20:29:41.095320   23124 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key
	I0127 20:29:41.095442   23124 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/client.key
	I0127 20:29:41.095556   23124 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key.31bdca25
	I0127 20:29:41.095652   23124 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.key
	I0127 20:29:41.095885   23124 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem (1338 bytes)
	W0127 20:29:41.095930   23124 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406_empty.pem, impossibly tiny 0 bytes
	I0127 20:29:41.095950   23124 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 20:29:41.095992   23124 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem (1078 bytes)
	I0127 20:29:41.096065   23124 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem (1123 bytes)
	I0127 20:29:41.096111   23124 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem (1679 bytes)
	I0127 20:29:41.096181   23124 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:29:41.096821   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 20:29:41.116356   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 20:29:41.135176   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 20:29:41.153347   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/old-k8s-version-720000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 20:29:41.171707   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 20:29:41.189470   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 20:29:41.208347   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 20:29:41.227005   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 20:29:41.246970   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /usr/share/ca-certificates/44062.pem (1708 bytes)
	I0127 20:29:41.265377   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 20:29:41.282978   23124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem --> /usr/share/ca-certificates/4406.pem (1338 bytes)
	I0127 20:29:41.301175   23124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 20:29:41.315122   23124 ssh_runner.go:195] Run: openssl version
	I0127 20:29:41.320925   23124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44062.pem && ln -fs /usr/share/ca-certificates/44062.pem /etc/ssl/certs/44062.pem"
	I0127 20:29:41.329343   23124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44062.pem
	I0127 20:29:41.334017   23124 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:36 /usr/share/ca-certificates/44062.pem
	I0127 20:29:41.334084   23124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44062.pem
	I0127 20:29:41.339818   23124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44062.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 20:29:41.347550   23124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 20:29:41.357768   23124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:29:41.362750   23124 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:31 /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:29:41.362840   23124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:29:41.369108   23124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 20:29:41.376940   23124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4406.pem && ln -fs /usr/share/ca-certificates/4406.pem /etc/ssl/certs/4406.pem"
	I0127 20:29:41.385612   23124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4406.pem
	I0127 20:29:41.389885   23124 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:36 /usr/share/ca-certificates/4406.pem
	I0127 20:29:41.389929   23124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4406.pem
	I0127 20:29:41.395678   23124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4406.pem /etc/ssl/certs/51391683.0"
	I0127 20:29:41.403528   23124 kubeadm.go:401] StartCluster: {Name:old-k8s-version-720000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-720000 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:29:41.403646   23124 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:29:41.426637   23124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 20:29:41.434857   23124 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0127 20:29:41.434872   23124 kubeadm.go:633] restartCluster start
	I0127 20:29:41.434929   23124 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 20:29:41.442079   23124 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:41.442174   23124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-720000
	I0127 20:29:41.503501   23124 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-720000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:29:41.503665   23124 kubeconfig.go:146] "old-k8s-version-720000" context is missing from /Users/jenkins/minikube-integration/15565-3092/kubeconfig - will repair!
	I0127 20:29:41.504000   23124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/kubeconfig: {Name:mkdfca390fbcfbb59336162afe07d375994efabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:29:41.505364   23124 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 20:29:41.513733   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:41.513821   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:41.522663   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:42.022770   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:42.022922   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:42.033156   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:42.524783   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:42.525065   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:42.536490   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:43.023675   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:43.023894   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:43.034775   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:43.522874   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:43.523069   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:43.533974   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:44.024785   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:44.024941   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:44.036266   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:44.522770   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:44.522865   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:44.532965   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:45.023948   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:45.024170   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:45.035075   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:45.522859   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:45.523001   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:45.534124   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:46.023106   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:46.023354   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:46.034600   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:46.523782   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:46.524014   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:46.535152   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:47.024223   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:47.024363   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:47.035748   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:47.524047   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:47.524223   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:47.535273   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:48.024396   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:48.024503   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:48.035681   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:48.523674   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:48.523795   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:48.534758   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:49.022783   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:49.022919   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:49.033190   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:49.524331   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:49.524520   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:49.536164   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:50.023310   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:50.023429   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:50.033931   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:50.523435   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:50.523601   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:50.534792   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:51.023002   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:51.023196   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:51.034133   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:51.524133   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:51.524380   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:51.535542   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:51.535553   23124 api_server.go:165] Checking apiserver status ...
	I0127 20:29:51.535607   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:29:51.544120   23124 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:29:51.544132   23124 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
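	[editor's note] The block above shows the restart path polling for a kube-apiserver process roughly every 500ms via "sudo pgrep -xnf kube-apiserver.*minikube.*" and giving up with "timed out waiting for the condition" once the wait window expires. As a minimal sketch of that poll-with-timeout pattern (a hypothetical standalone helper, not minikube's actual implementation):

	// waitloop.go: re-run pgrep every 500ms until it exits 0 or the context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForProcess(ctx context.Context, pattern string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			// pgrep exits 0 only when a process matching the pattern exists.
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
				// retry on the next tick
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
			fmt.Println(err)
		}
	}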
	I0127 20:29:51.544137   23124 kubeadm.go:1120] stopping kube-system containers ...
	I0127 20:29:51.544215   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:29:51.567279   23124 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 20:29:51.578132   23124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:29:51.586168   23124 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jan 28 04:26 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Jan 28 04:26 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan 28 04:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan 28 04:26 /etc/kubernetes/scheduler.conf
	
	I0127 20:29:51.586230   23124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 20:29:51.594124   23124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 20:29:51.601752   23124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 20:29:51.609409   23124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 20:29:51.617231   23124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 20:29:51.625164   23124 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0127 20:29:51.625180   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:29:51.680633   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:29:52.313577   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:29:52.528635   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:29:52.590829   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:29:52.673401   23124 api_server.go:51] waiting for apiserver process to appear ...
	I0127 20:29:52.673568   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:53.183533   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:53.684029   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:54.183677   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:54.683219   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:55.183636   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:55.683303   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:56.183205   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:56.683346   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:57.184006   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:57.684154   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:58.185286   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:58.684782   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:59.183220   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:29:59.684051   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:00.185297   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:00.685299   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:01.183288   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:01.683309   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:02.183407   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:02.683742   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:03.183377   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:03.683308   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:04.184943   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:04.683558   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:05.183417   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:05.684573   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:06.185326   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:06.684002   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:07.185338   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:07.684490   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:08.184140   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:08.683542   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:09.183646   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:09.684520   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:10.183880   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:10.683866   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:11.185323   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:11.685418   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:12.185377   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:12.683558   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:13.185286   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:13.684597   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:14.183347   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:14.684081   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:15.183475   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:15.683214   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:16.184813   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:16.684830   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:17.183827   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:17.685444   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:18.183327   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:18.683838   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:19.185360   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:19.683589   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:20.183399   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:20.684488   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:21.183306   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:21.683293   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:22.183257   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:22.683360   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:23.183715   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:23.683374   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:24.185333   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:24.683319   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:25.184126   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:25.684020   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:26.184380   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:26.685342   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:27.184551   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:27.683628   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:28.183331   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:28.683416   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:29.185328   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:29.684938   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:30.183746   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:30.683308   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:31.183837   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:31.683392   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:32.185188   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:32.684405   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:33.184788   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:33.683610   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:34.184035   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:34.685386   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:35.185474   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:35.685480   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:36.184606   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:36.684515   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:37.183498   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:37.683487   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:38.183499   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:38.683300   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:39.183614   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:39.685542   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:40.185421   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:40.683641   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:41.183282   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:41.685421   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:42.183925   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:42.684639   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:43.184333   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:43.684070   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:44.185487   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:44.685528   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:45.185381   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:45.685272   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:46.184156   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:46.683972   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:47.185061   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:47.685558   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:48.185474   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:48.685430   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:49.185480   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:49.683373   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:50.183605   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:50.685568   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:51.183548   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:51.684736   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:52.185434   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:52.683436   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:30:52.709192   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.709206   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:30:52.709274   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:30:52.731715   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.731731   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:30:52.731805   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:30:52.755747   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.755761   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:30:52.755840   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:30:52.779725   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.779739   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:30:52.779821   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:30:52.803569   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.803583   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:30:52.803654   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:30:52.827317   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.827331   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:30:52.827404   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:30:52.850922   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.850936   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:30:52.851002   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:30:52.873474   23124 logs.go:279] 0 containers: []
	W0127 20:30:52.873488   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:30:52.873501   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:30:52.873508   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:30:52.913972   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:30:52.913988   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:30:52.926285   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:30:52.926300   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:30:52.984318   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:30:52.984336   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:30:52.984343   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:30:53.001001   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:30:53.001015   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:30:55.051812   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050778729s)
	I0127 20:30:57.553669   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:30:57.683502   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:30:57.709092   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.709108   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:30:57.709178   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:30:57.733009   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.733036   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:30:57.733130   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:30:57.757787   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.757803   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:30:57.757875   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:30:57.781963   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.781977   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:30:57.782064   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:30:57.805968   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.805982   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:30:57.806067   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:30:57.830081   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.830095   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:30:57.830169   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:30:57.864612   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.864628   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:30:57.864703   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:30:57.888372   23124 logs.go:279] 0 containers: []
	W0127 20:30:57.888392   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:30:57.888406   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:30:57.888419   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:30:57.929981   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:30:57.929997   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:30:57.943083   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:30:57.943096   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:30:57.999011   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:30:57.999026   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:30:57.999033   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:30:58.014816   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:30:58.014828   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:00.064317   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04947062s)
	I0127 20:31:02.565368   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:02.685558   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:02.711257   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.711272   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:02.711341   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:02.734258   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.734272   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:02.734343   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:02.757244   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.757259   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:02.757337   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:02.782513   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.782528   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:02.782605   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:02.805848   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.805861   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:02.805930   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:02.829882   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.829896   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:02.829965   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:02.853168   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.853181   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:02.853248   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:02.876557   23124 logs.go:279] 0 containers: []
	W0127 20:31:02.876571   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:02.876578   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:02.876585   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:02.915889   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:02.915905   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:02.928682   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:02.928696   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:02.987886   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:02.987900   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:02.987907   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:03.005633   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:03.005648   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:05.060153   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054484296s)
	I0127 20:31:07.560843   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:07.685583   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:07.711567   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.711581   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:07.711653   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:07.734498   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.734511   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:07.734579   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:07.758028   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.758041   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:07.758115   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:07.781157   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.781169   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:07.781241   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:07.804218   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.804233   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:07.804306   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:07.827490   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.827504   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:07.827576   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:07.850598   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.850612   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:07.850683   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:07.874810   23124 logs.go:279] 0 containers: []
	W0127 20:31:07.874825   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:07.874832   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:07.874839   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:07.932103   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:07.932120   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:07.932126   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:07.948063   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:07.948075   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:10.000280   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052187157s)
	I0127 20:31:10.000393   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:10.000400   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:10.039100   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:10.039114   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:12.551613   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:12.683609   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:12.708913   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.708927   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:12.709001   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:12.731926   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.731940   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:12.732008   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:12.756329   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.756343   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:12.756414   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:12.780828   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.780844   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:12.780925   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:12.805763   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.805779   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:12.805855   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:12.830475   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.830493   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:12.830571   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:12.861729   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.861743   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:12.861817   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:12.885361   23124 logs.go:279] 0 containers: []
	W0127 20:31:12.885376   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:12.885383   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:12.885390   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:14.935873   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050464762s)
	I0127 20:31:14.935984   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:14.935991   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:14.974165   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:14.974183   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:14.986952   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:14.986971   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:15.054896   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:15.054910   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:15.054918   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:17.571526   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:17.683419   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:17.710278   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.710294   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:17.710365   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:17.736517   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.736531   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:17.736603   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:17.761639   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.761654   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:17.761737   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:17.788856   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.788870   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:17.788958   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:17.814554   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.814569   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:17.814649   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:17.840112   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.840128   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:17.840225   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:17.867180   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.867194   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:17.867275   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:17.893181   23124 logs.go:279] 0 containers: []
	W0127 20:31:17.893196   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:17.893204   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:17.893211   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:17.905951   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:17.905970   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:17.966316   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:17.966329   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:17.966335   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:17.983975   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:17.983998   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:20.037229   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05321112s)
	I0127 20:31:20.037337   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:20.037344   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:22.579721   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:22.683509   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:22.710057   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.710071   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:22.710139   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:22.733640   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.733654   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:22.733726   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:22.756289   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.756305   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:22.756409   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:22.795358   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.795374   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:22.795481   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:22.819801   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.819816   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:22.819918   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:22.843048   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.843062   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:22.843135   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:22.869907   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.869925   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:22.870006   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:22.898666   23124 logs.go:279] 0 containers: []
	W0127 20:31:22.898687   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:22.898697   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:22.898707   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:22.966717   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:22.966730   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:22.966739   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:22.984886   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:22.984902   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:25.037498   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052572385s)
	I0127 20:31:25.037638   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:25.037647   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:25.080382   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:25.080406   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:27.594477   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:27.683603   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:27.707660   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.707674   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:27.707744   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:27.730551   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.730566   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:27.730637   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:27.758869   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.758888   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:27.758981   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:27.785993   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.786011   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:27.786084   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:27.819113   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.819130   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:27.819206   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:27.846567   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.846586   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:27.846671   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:27.873357   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.873373   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:27.873446   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:27.903625   23124 logs.go:279] 0 containers: []
	W0127 20:31:27.903640   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:27.903650   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:27.903661   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:29.957621   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053939394s)
	I0127 20:31:29.957737   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:29.957745   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:29.999858   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:29.999877   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:30.014326   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:30.014350   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:30.080887   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:30.080920   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:30.080927   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:32.597775   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:32.685591   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:32.710850   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.710863   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:32.710932   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:32.735298   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.735312   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:32.735380   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:32.760322   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.760337   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:32.760406   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:32.784579   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.784594   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:32.784665   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:32.806950   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.806969   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:32.807038   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:32.831056   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.831070   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:32.831142   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:32.855083   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.855098   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:32.855170   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:32.879397   23124 logs.go:279] 0 containers: []
	W0127 20:31:32.879410   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:32.879417   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:32.879424   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:32.920274   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:32.920288   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:32.932799   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:32.932815   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:32.995040   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:32.995052   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:32.995059   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:33.013530   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:33.013546   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:35.066939   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053374357s)
	I0127 20:31:37.567142   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:37.683635   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:37.707542   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.707556   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:37.707628   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:37.730817   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.730832   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:37.730910   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:37.754985   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.754999   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:37.755083   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:37.780817   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.780834   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:37.780907   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:37.804860   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.804875   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:37.804944   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:37.830038   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.830053   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:37.830139   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:37.854254   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.854269   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:37.854338   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:37.879132   23124 logs.go:279] 0 containers: []
	W0127 20:31:37.879146   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:37.879153   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:37.879160   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:37.936359   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:37.936373   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:37.936381   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:37.952462   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:37.952477   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:40.005361   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05286609s)
	I0127 20:31:40.005476   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:40.005485   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:40.048598   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:40.048615   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:42.567606   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:42.683619   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:42.717200   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.717215   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:42.717289   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:42.743252   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.743267   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:42.743346   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:42.774058   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.774077   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:42.774166   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:42.807423   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.807441   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:42.807593   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:42.838630   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.838644   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:42.838714   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:42.869657   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.869675   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:42.869773   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:42.905847   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.905862   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:42.905980   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:42.930456   23124 logs.go:279] 0 containers: []
	W0127 20:31:42.930470   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:42.930478   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:42.930485   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:42.971945   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:42.971969   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:42.991893   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:42.991914   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:43.056203   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:43.056224   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:43.056261   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:43.078549   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:43.078569   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:45.154496   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.075903638s)
	I0127 20:31:47.654873   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:47.683690   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:47.711279   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.711293   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:47.711364   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:47.735296   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.735310   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:47.735413   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:47.759470   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.759486   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:47.759617   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:47.785323   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.785344   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:47.785455   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:47.811611   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.811625   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:47.811713   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:47.835902   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.835918   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:47.835996   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:47.860026   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.860044   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:47.860156   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:47.887698   23124 logs.go:279] 0 containers: []
	W0127 20:31:47.887714   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:47.887724   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:47.887734   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:47.944926   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:47.944946   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:47.958624   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:47.958640   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:48.026777   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:48.026800   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:48.026808   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:48.043427   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:48.043445   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:50.093989   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050525969s)
	I0127 20:31:52.595305   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:52.683522   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:52.709712   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.709726   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:52.709807   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:52.731618   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.731633   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:52.731703   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:52.755929   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.755944   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:52.756033   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:52.780526   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.780542   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:52.780644   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:52.805067   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.805083   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:52.805176   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:52.829587   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.829601   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:52.829673   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:52.853035   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.853084   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:52.853203   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:52.876803   23124 logs.go:279] 0 containers: []
	W0127 20:31:52.876819   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:52.876828   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:31:52.876844   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:31:52.921221   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:52.921241   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:52.934246   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:52.934259   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:52.994608   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:52.994623   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:52.994634   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:53.012720   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:53.012739   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:31:55.065372   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052611985s)
	I0127 20:31:57.566570   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:31:57.683580   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:31:57.707475   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.707489   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:31:57.707564   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:31:57.733460   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.733474   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:31:57.733549   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:31:57.757695   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.757712   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:31:57.757792   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:31:57.782252   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.782267   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:31:57.782341   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:31:57.807415   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.807431   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:31:57.807521   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:31:57.832512   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.832527   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:31:57.832609   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:31:57.860934   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.860948   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:31:57.861039   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:31:57.885796   23124 logs.go:279] 0 containers: []
	W0127 20:31:57.885810   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:31:57.885817   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:31:57.885824   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:31:57.897940   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:31:57.897957   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:31:57.954237   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:31:57.954249   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:31:57.954257   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:31:57.970536   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:31:57.970549   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:00.021139   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050570203s)
	I0127 20:32:00.021282   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:00.021290   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:02.562375   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:02.683582   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:02.706995   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.707009   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:02.707078   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:02.729358   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.729372   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:02.729463   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:02.753178   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.753190   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:02.753261   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:02.777495   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.777510   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:02.777589   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:02.801312   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.801326   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:02.801395   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:02.826793   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.826809   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:02.826883   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:02.850667   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.850681   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:02.850755   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:02.874650   23124 logs.go:279] 0 containers: []
	W0127 20:32:02.874663   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:02.874670   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:02.874678   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:02.914289   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:02.914303   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:02.926760   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:02.926794   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:02.983656   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:02.983668   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:02.983676   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:03.001164   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:03.001179   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:05.049291   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048093826s)
	I0127 20:32:07.550014   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:07.684811   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:07.711046   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.711061   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:07.711133   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:07.736393   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.736408   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:07.736476   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:07.759951   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.759965   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:07.760034   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:07.783972   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.783985   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:07.784055   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:07.808146   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.808160   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:07.808229   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:07.831275   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.831289   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:07.831360   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:07.854812   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.854827   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:07.854902   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:07.878959   23124 logs.go:279] 0 containers: []
	W0127 20:32:07.878974   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:07.878981   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:07.878989   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:07.917607   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:07.917621   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:07.930883   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:07.930898   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:07.987742   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:07.987754   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:07.987762   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:08.005306   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:08.005321   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:10.055040   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049701451s)
	I0127 20:32:12.556297   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:12.683921   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:12.707452   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.707467   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:12.707567   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:12.731535   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.731550   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:12.731623   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:12.758318   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.758335   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:12.758413   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:12.783034   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.783048   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:12.783124   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:12.807983   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.807997   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:12.808071   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:12.864826   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.864842   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:12.864923   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:12.888806   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.888820   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:12.888889   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:12.912573   23124 logs.go:279] 0 containers: []
	W0127 20:32:12.912590   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:12.912599   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:12.912609   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:12.953504   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:12.953519   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:12.965537   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:12.965553   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:13.021150   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:13.021161   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:13.021168   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:13.037192   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:13.037206   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:15.087191   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049967249s)
	I0127 20:32:17.587971   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:17.684202   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:17.709992   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.710007   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:17.710078   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:17.734273   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.734287   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:17.734356   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:17.759185   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.759199   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:17.759269   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:17.782800   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.782813   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:17.782888   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:17.805982   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.805996   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:17.806076   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:17.831403   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.831417   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:17.831490   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:17.855828   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.855844   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:17.855911   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:17.879226   23124 logs.go:279] 0 containers: []
	W0127 20:32:17.879240   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:17.879248   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:17.879255   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:17.936754   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:17.936766   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:17.936774   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:17.953719   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:17.953748   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:20.004407   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050639301s)
	I0127 20:32:20.004523   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:20.004532   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:20.044158   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:20.044174   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:22.556754   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:22.684252   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:22.708888   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.708903   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:22.708974   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:22.732824   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.732838   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:22.732910   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:22.756501   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.756515   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:22.756599   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:22.780187   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.780202   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:22.780275   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:22.803976   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.804010   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:22.804136   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:22.828093   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.828107   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:22.828177   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:22.852046   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.852059   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:22.852143   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:22.876364   23124 logs.go:279] 0 containers: []
	W0127 20:32:22.876378   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:22.876389   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:22.876397   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:22.916872   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:22.916888   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:22.929827   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:22.929843   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:22.985498   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:22.985546   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:22.985572   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:23.003088   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:23.003101   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:25.054129   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051009737s)
	I0127 20:32:27.556592   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:27.683836   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:27.708522   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.708535   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:27.708609   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:27.735458   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.735474   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:27.735552   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:27.760399   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.760415   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:27.760487   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:27.786142   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.786157   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:27.786226   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:27.810775   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.810787   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:27.810857   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:27.834786   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.834801   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:27.834874   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:27.859182   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.859198   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:27.859269   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:27.883041   23124 logs.go:279] 0 containers: []
	W0127 20:32:27.883056   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:27.883064   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:27.883070   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:27.923584   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:27.923600   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:27.936142   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:27.936156   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:27.993102   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:27.993115   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:27.993122   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:28.010037   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:28.010053   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:30.061438   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051366878s)
	I0127 20:32:32.562333   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:32.683950   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:32.709192   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.709206   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:32.709275   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:32.733226   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.733241   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:32.733314   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:32.757404   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.757417   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:32.757486   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:32.781649   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.781664   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:32.781735   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:32.805377   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.805392   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:32.805464   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:32.829304   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.829318   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:32.829389   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:32.853404   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.853419   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:32.853487   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:32.877474   23124 logs.go:279] 0 containers: []
	W0127 20:32:32.877488   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:32.877495   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:32.877503   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:32.932906   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:32.932942   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:32.932948   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:32.949143   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:32.949157   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:35.001020   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051843348s)
	I0127 20:32:35.001128   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:35.001136   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:35.040322   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:35.040336   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:37.553759   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:37.683958   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:37.709614   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.709629   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:37.709702   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:37.733273   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.733288   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:37.733364   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:37.757325   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.757339   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:37.757408   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:37.781266   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.781280   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:37.781351   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:37.804094   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.804109   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:37.804178   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:37.828421   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.828434   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:37.828504   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:37.853179   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.853193   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:37.853264   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:37.876779   23124 logs.go:279] 0 containers: []
	W0127 20:32:37.876793   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:37.876800   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:37.876807   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:37.917468   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:37.917484   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:37.930005   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:37.930021   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:37.985722   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:37.985736   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:37.985743   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:38.002376   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:38.002388   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:40.052674   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050268272s)
	I0127 20:32:42.554563   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:42.684906   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:42.710573   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.710588   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:42.710697   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:42.735183   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.735198   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:42.735274   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:42.761518   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.761535   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:42.761611   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:42.786617   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.786630   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:42.786719   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:42.811472   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.811490   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:42.811590   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:42.855487   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.855510   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:42.855620   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:42.881149   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.881163   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:42.881235   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:42.905395   23124 logs.go:279] 0 containers: []
	W0127 20:32:42.905411   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:42.905418   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:42.905426   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:42.945980   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:42.945995   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:42.958576   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:42.958620   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:43.015612   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:43.015624   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:43.015631   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:43.031649   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:43.031663   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:45.081577   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049895748s)
	I0127 20:32:47.581940   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:47.684875   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:47.710838   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.710851   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:47.710949   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:47.734343   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.734358   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:47.734427   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:47.758530   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.758544   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:47.758612   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:47.783561   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.783582   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:47.783653   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:47.807504   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.807518   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:47.807586   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:47.832570   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.832608   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:47.832724   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:47.855764   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.855779   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:47.855848   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:47.878444   23124 logs.go:279] 0 containers: []
	W0127 20:32:47.878457   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:47.878464   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:47.878471   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:49.930098   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05160775s)
	I0127 20:32:49.930205   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:49.930212   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:49.968591   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:49.968605   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:49.981387   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:49.981437   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:50.038278   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:50.038290   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:50.038297   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:52.554083   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:52.684099   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:52.710002   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.710017   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:52.710086   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:52.733165   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.733178   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:52.733248   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:52.758466   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.758480   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:52.758554   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:52.780970   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.780984   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:52.781058   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:52.805771   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.805785   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:52.805862   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:52.829605   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.829620   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:52.829701   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:52.854071   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.854084   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:52.854153   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:52.878000   23124 logs.go:279] 0 containers: []
	W0127 20:32:52.878034   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:52.878042   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:52.878050   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:52.918964   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:52.918979   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:52.931888   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:52.931902   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:52.987699   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:52.987713   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:52.987721   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:53.005005   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:53.005019   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:32:55.055880   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050842814s)
	I0127 20:32:57.556786   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:32:57.685357   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:32:57.711041   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.711057   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:32:57.711129   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:32:57.737249   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.737264   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:32:57.737340   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:32:57.767150   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.767166   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:32:57.767239   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:32:57.792427   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.792446   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:32:57.792521   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:32:57.817350   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.817366   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:32:57.817441   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:32:57.864320   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.864334   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:32:57.864410   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:32:57.888708   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.888723   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:32:57.888795   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:32:57.912287   23124 logs.go:279] 0 containers: []
	W0127 20:32:57.912301   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:32:57.912308   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:32:57.912319   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:32:57.953373   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:32:57.953387   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:32:57.965826   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:32:57.965840   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:32:58.022400   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:32:58.022411   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:32:58.022419   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:32:58.039138   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:32:58.039154   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:00.089261   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050073921s)
	I0127 20:33:02.590576   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:02.684494   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:02.710562   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.710575   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:02.710642   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:02.733881   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.733895   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:02.733966   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:02.756331   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.756346   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:02.756417   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:02.779177   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.779191   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:02.779260   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:02.803342   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.803355   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:02.803425   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:02.827047   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.827066   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:02.827159   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:02.850144   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.850159   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:02.850238   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:02.874891   23124 logs.go:279] 0 containers: []
	W0127 20:33:02.874904   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:02.874910   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:02.874919   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:02.916427   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:02.916441   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:02.928805   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:02.928820   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:02.986239   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:02.986253   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:02.986260   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:03.003408   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:03.003423   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:05.056944   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053502264s)
	I0127 20:33:07.557928   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:07.683887   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:07.709771   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.709786   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:07.709873   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:07.733303   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.733317   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:07.733396   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:07.756656   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.756668   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:07.756738   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:07.781556   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.781570   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:07.781647   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:07.813765   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.813782   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:07.813869   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:07.841389   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.841402   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:07.841473   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:07.865456   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.865471   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:07.865544   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:07.888900   23124 logs.go:279] 0 containers: []
	W0127 20:33:07.888915   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:07.888923   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:07.888931   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:07.929324   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:07.929339   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:07.942307   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:07.942358   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:08.000554   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:08.000567   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:08.000575   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:08.016914   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:08.016927   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:10.065674   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048728705s)
	I0127 20:33:12.566696   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:12.684458   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:12.709213   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.709228   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:12.709298   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:12.732758   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.732779   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:12.732864   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:12.758348   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.758365   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:12.758436   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:12.784478   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.784492   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:12.784619   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:12.810665   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.810681   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:12.810765   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:12.860804   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.860818   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:12.860889   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:12.883879   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.883893   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:12.883964   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:12.908659   23124 logs.go:279] 0 containers: []
	W0127 20:33:12.908675   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:12.908682   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:12.908690   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:12.921282   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:12.921296   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:12.979076   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:12.979102   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:12.979129   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:12.995252   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:12.995267   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:15.044349   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049063995s)
	I0127 20:33:15.044457   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:15.044464   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:17.584395   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:17.684627   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:17.711133   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.711147   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:17.711214   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:17.735421   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.735436   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:17.735509   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:17.758828   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.758841   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:17.758914   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:17.783699   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.783716   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:17.783802   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:17.808532   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.808545   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:17.808617   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:17.833387   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.833401   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:17.833475   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:17.857906   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.857920   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:17.858006   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:17.881402   23124 logs.go:279] 0 containers: []
	W0127 20:33:17.881416   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:17.881423   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:17.881429   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:17.897609   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:17.897624   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:19.947192   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049550079s)
	I0127 20:33:19.947304   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:19.947312   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:19.986493   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:19.986506   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:19.998775   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:19.998791   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:20.055974   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:22.556345   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:22.683876   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:22.708983   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.708997   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:22.709071   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:22.733373   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.733387   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:22.733458   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:22.755977   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.755992   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:22.756067   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:22.780968   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.780981   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:22.781049   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:22.804186   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.804201   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:22.804288   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:22.829181   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.829196   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:22.829269   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:22.852685   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.852699   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:22.852769   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:22.876241   23124 logs.go:279] 0 containers: []
	W0127 20:33:22.876255   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:22.876262   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:22.876269   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:22.936059   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:22.936070   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:22.936077   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:22.952273   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:22.952287   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:25.001841   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049536093s)
	I0127 20:33:25.001949   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:25.001956   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:25.041412   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:25.041429   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:27.556544   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:27.684218   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:27.709417   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.709432   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:27.709505   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:27.734442   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.734456   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:27.734528   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:27.762234   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.762247   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:27.762319   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:27.787566   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.787579   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:27.787651   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:27.813409   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.813423   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:27.813495   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:27.856975   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.856989   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:27.857057   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:27.880989   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.881004   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:27.881072   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:27.904551   23124 logs.go:279] 0 containers: []
	W0127 20:33:27.904565   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:27.904587   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:27.904593   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:27.946477   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:27.946493   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:27.959105   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:27.959119   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:28.015828   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:28.015849   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:28.015858   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:28.033143   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:28.033158   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:30.083166   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049990439s)
	I0127 20:33:32.585547   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:32.685007   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:32.710961   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.710976   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:32.711050   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:32.735931   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.735945   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:32.736017   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:32.760010   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.760025   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:32.760095   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:32.783461   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.783476   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:32.783549   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:32.807370   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.807383   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:32.807451   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:32.831737   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.831751   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:32.831834   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:32.854592   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.854605   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:32.854676   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:32.878598   23124 logs.go:279] 0 containers: []
	W0127 20:33:32.878612   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:32.878620   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:32.878628   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:32.936965   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:32.936983   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:32.936989   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:32.953264   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:32.953277   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:35.003921   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050624687s)
	I0127 20:33:35.004044   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:35.004051   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:35.043305   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:35.043320   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:37.555798   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:37.683961   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:37.709219   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.709234   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:37.709332   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:37.732421   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.732434   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:37.732503   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:37.757385   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.757401   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:37.757474   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:37.783362   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.783376   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:37.783445   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:37.807466   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.807480   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:37.807580   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:37.832823   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.832838   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:37.832910   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:37.857959   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.857974   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:37.858048   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:37.881885   23124 logs.go:279] 0 containers: []
	W0127 20:33:37.881900   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:37.881907   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:37.881914   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:37.923245   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:37.923261   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:37.935387   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:37.935399   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:37.991413   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:37.991427   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:37.991434   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:38.008003   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:38.008016   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:40.060019   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051984242s)
	I0127 20:33:42.562388   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:42.684521   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:42.710189   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.710203   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:42.710275   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:42.736411   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.736425   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:42.736501   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:42.761953   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.761968   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:42.762042   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:42.786635   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.786650   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:42.786721   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:42.812439   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.812455   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:42.812535   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:42.836978   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.836990   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:42.837065   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:42.875953   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.875966   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:42.876036   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:42.899957   23124 logs.go:279] 0 containers: []
	W0127 20:33:42.899971   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:42.899977   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:42.899984   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:42.912143   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:42.912200   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:42.969177   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:42.969190   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:42.969197   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:42.985457   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:42.985473   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:45.036126   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05063405s)
	I0127 20:33:45.036241   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:45.036248   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:47.574551   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:47.685134   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:33:47.711079   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.711094   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:33:47.711165   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:33:47.735883   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.735901   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:33:47.735989   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:33:47.759054   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.759068   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:33:47.759140   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:33:47.782760   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.782775   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:33:47.782845   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:33:47.807371   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.807400   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:33:47.807471   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:33:47.831782   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.831795   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:33:47.831866   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:33:47.855302   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.855316   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:33:47.855388   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:33:47.881254   23124 logs.go:279] 0 containers: []
	W0127 20:33:47.881268   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:33:47.881293   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:33:47.881303   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:33:47.898295   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:33:47.898310   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:33:49.947322   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048994342s)
	I0127 20:33:49.947432   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:33:49.947439   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:33:49.989493   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:33:49.989508   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:33:50.002183   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:33:50.002230   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:33:50.057781   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:33:52.558290   23124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:33:52.684024   23124 kubeadm.go:637] restartCluster took 4m11.248383541s
	W0127 20:33:52.684170   23124 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
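After roughly 4m11s of the polling above, restartCluster gives up because the apiserver process never appeared, and minikube falls back to wiping the control plane and re-initializing it with kubeadm. The commands logged immediately below amount to the following sequence, restated here as a sketch for readability (paths, flags, and the preflight-error list are copied from the logged lines, not an official procedure):

	# Hedged sketch of the fallback minikube runs next, inside the node (kubeadm v1.16.0 binaries).
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/dockershim.sock --force
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables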
	I0127 20:33:52.684204   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0127 20:33:53.106409   23124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:33:53.117411   23124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 20:33:53.125947   23124 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 20:33:53.126001   23124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:33:53.133937   23124 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 20:33:53.133971   23124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 20:33:53.184465   23124 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0127 20:33:53.184539   23124 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 20:33:53.489040   23124 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 20:33:53.489129   23124 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 20:33:53.489208   23124 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 20:33:53.722029   23124 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 20:33:53.724001   23124 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 20:33:53.731362   23124 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0127 20:33:53.794793   23124 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 20:33:53.816281   23124 out.go:204]   - Generating certificates and keys ...
	I0127 20:33:53.816369   23124 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 20:33:53.816467   23124 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 20:33:53.816561   23124 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 20:33:53.816633   23124 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0127 20:33:53.816695   23124 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 20:33:53.816785   23124 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0127 20:33:53.816857   23124 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0127 20:33:53.816908   23124 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0127 20:33:53.816975   23124 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 20:33:53.817032   23124 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 20:33:53.817062   23124 kubeadm.go:322] [certs] Using the existing "sa" key
	I0127 20:33:53.817110   23124 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 20:33:54.069101   23124 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 20:33:54.233178   23124 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 20:33:54.557173   23124 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 20:33:54.694553   23124 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 20:33:54.695138   23124 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 20:33:54.716825   23124 out.go:204]   - Booting up control plane ...
	I0127 20:33:54.716955   23124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 20:33:54.717059   23124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 20:33:54.717147   23124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 20:33:54.717236   23124 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 20:33:54.717407   23124 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 20:34:34.704957   23124 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 20:34:34.705694   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:34:34.705917   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:34:39.707026   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:34:39.707274   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:34:49.708390   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:34:49.708612   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:35:09.709159   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:35:09.709310   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:35:49.711443   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:35:49.711664   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:35:49.711679   23124 kubeadm.go:322] 
	I0127 20:35:49.711719   23124 kubeadm.go:322] Unfortunately, an error has occurred:
	I0127 20:35:49.711759   23124 kubeadm.go:322] 	timed out waiting for the condition
	I0127 20:35:49.711770   23124 kubeadm.go:322] 
	I0127 20:35:49.711817   23124 kubeadm.go:322] This error is likely caused by:
	I0127 20:35:49.711893   23124 kubeadm.go:322] 	- The kubelet is not running
	I0127 20:35:49.712046   23124 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 20:35:49.712062   23124 kubeadm.go:322] 
	I0127 20:35:49.712182   23124 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 20:35:49.712220   23124 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0127 20:35:49.712262   23124 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0127 20:35:49.712273   23124 kubeadm.go:322] 
	I0127 20:35:49.712405   23124 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 20:35:49.712515   23124 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0127 20:35:49.712610   23124 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0127 20:35:49.712669   23124 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0127 20:35:49.712748   23124 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0127 20:35:49.712783   23124 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0127 20:35:49.715029   23124 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 20:35:49.715102   23124 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 20:35:49.715217   23124 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0127 20:35:49.715312   23124 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 20:35:49.715382   23124 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 20:35:49.715443   23124 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0127 20:35:49.715586   23124 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0127 20:35:49.715615   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0127 20:35:50.128868   23124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:35:50.138929   23124 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0127 20:35:50.138987   23124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:35:50.146527   23124 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 20:35:50.146547   23124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0127 20:35:50.194696   23124 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0127 20:35:50.195649   23124 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 20:35:50.503778   23124 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 20:35:50.503870   23124 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 20:35:50.503964   23124 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 20:35:50.735763   23124 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 20:35:50.737040   23124 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 20:35:50.743814   23124 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0127 20:35:50.815257   23124 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 20:35:50.837078   23124 out.go:204]   - Generating certificates and keys ...
	I0127 20:35:50.837174   23124 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 20:35:50.837241   23124 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 20:35:50.837326   23124 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 20:35:50.837403   23124 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0127 20:35:50.837505   23124 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0127 20:35:50.837554   23124 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0127 20:35:50.837609   23124 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0127 20:35:50.837662   23124 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0127 20:35:50.837748   23124 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 20:35:50.837862   23124 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 20:35:50.837941   23124 kubeadm.go:322] [certs] Using the existing "sa" key
	I0127 20:35:50.838048   23124 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 20:35:50.928888   23124 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 20:35:51.022130   23124 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 20:35:51.294740   23124 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 20:35:51.520806   23124 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 20:35:51.521345   23124 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 20:35:51.542943   23124 out.go:204]   - Booting up control plane ...
	I0127 20:35:51.543162   23124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 20:35:51.543289   23124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 20:35:51.543405   23124 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 20:35:51.543563   23124 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 20:35:51.543835   23124 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 20:36:31.530609   23124 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0127 20:36:31.531787   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:36:31.532025   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:36:36.533468   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:36:36.533689   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:36:46.534292   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:36:46.534497   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:37:06.535082   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:37:06.535362   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:37:46.537409   23124 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0127 20:37:46.537679   23124 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0127 20:37:46.537695   23124 kubeadm.go:322] 
	I0127 20:37:46.537743   23124 kubeadm.go:322] Unfortunately, an error has occurred:
	I0127 20:37:46.537785   23124 kubeadm.go:322] 	timed out waiting for the condition
	I0127 20:37:46.537790   23124 kubeadm.go:322] 
	I0127 20:37:46.537838   23124 kubeadm.go:322] This error is likely caused by:
	I0127 20:37:46.537883   23124 kubeadm.go:322] 	- The kubelet is not running
	I0127 20:37:46.538007   23124 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0127 20:37:46.538019   23124 kubeadm.go:322] 
	I0127 20:37:46.538163   23124 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0127 20:37:46.538202   23124 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0127 20:37:46.538234   23124 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0127 20:37:46.538242   23124 kubeadm.go:322] 
	I0127 20:37:46.538366   23124 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0127 20:37:46.538460   23124 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0127 20:37:46.538577   23124 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0127 20:37:46.538629   23124 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0127 20:37:46.538727   23124 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0127 20:37:46.538775   23124 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0127 20:37:46.541087   23124 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0127 20:37:46.541153   23124 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0127 20:37:46.541262   23124 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
	I0127 20:37:46.541369   23124 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 20:37:46.541438   23124 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0127 20:37:46.541520   23124 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0127 20:37:46.541547   23124 kubeadm.go:403] StartCluster complete in 8m5.136563851s
	I0127 20:37:46.541636   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 20:37:46.564851   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.564863   23124 logs.go:281] No container was found matching "kube-apiserver"
	I0127 20:37:46.564932   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 20:37:46.587977   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.587991   23124 logs.go:281] No container was found matching "etcd"
	I0127 20:37:46.588058   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 20:37:46.612875   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.612890   23124 logs.go:281] No container was found matching "coredns"
	I0127 20:37:46.612959   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 20:37:46.637196   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.637210   23124 logs.go:281] No container was found matching "kube-scheduler"
	I0127 20:37:46.637282   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 20:37:46.660648   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.660661   23124 logs.go:281] No container was found matching "kube-proxy"
	I0127 20:37:46.660731   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0127 20:37:46.683792   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.683806   23124 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0127 20:37:46.683878   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0127 20:37:46.708860   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.708873   23124 logs.go:281] No container was found matching "storage-provisioner"
	I0127 20:37:46.708944   23124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 20:37:46.732733   23124 logs.go:279] 0 containers: []
	W0127 20:37:46.732750   23124 logs.go:281] No container was found matching "kube-controller-manager"
	I0127 20:37:46.732757   23124 logs.go:124] Gathering logs for kubelet ...
	I0127 20:37:46.732764   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 20:37:46.773535   23124 logs.go:124] Gathering logs for dmesg ...
	I0127 20:37:46.773550   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 20:37:46.785835   23124 logs.go:124] Gathering logs for describe nodes ...
	I0127 20:37:46.785884   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0127 20:37:46.843319   23124 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0127 20:37:46.843329   23124 logs.go:124] Gathering logs for Docker ...
	I0127 20:37:46.843336   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0127 20:37:46.859499   23124 logs.go:124] Gathering logs for container status ...
	I0127 20:37:46.859516   23124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 20:37:48.912958   23124 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053424324s)
	W0127 20:37:48.913067   23124 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0127 20:37:48.913089   23124 out.go:239] * 
	W0127 20:37:48.913198   23124 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 20:37:48.913212   23124 out.go:239] * 
	W0127 20:37:48.913857   23124 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:37:49.009483   23124 out.go:177] 
	W0127 20:37:49.051558   23124 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.22. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0127 20:37:49.051647   23124 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0127 20:37:49.051722   23124 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0127 20:37:49.073587   23124 out.go:177] 

                                                
                                                
** /stderr **
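Editor's note: the kubeadm wait-control-plane timeout above, the IsDockerSystemdCheck warning, and minikube's own "Suggestion" line all point at the cgroup-driver configuration on the node. A minimal sketch of the two usual ways to act on that, assuming the profile name from this run; the daemon.json fragment follows the kubernetes.io/docs/setup/cri guide referenced in the warning and is illustrative, not taken from this report:

    # Option A (what minikube suggests above): restart the profile with the kubelet's
    # cgroup driver set explicitly.
    out/minikube-darwin-amd64 start -p old-k8s-version-720000 --extra-config=kubelet.cgroup-driver=systemd

    # Option B: switch Docker inside the node to the systemd cgroup driver so it matches
    # what kubeadm recommends, then restart the Docker service on the node.
    #   /etc/docker/daemon.json:  { "exec-opts": ["native.cgroupdriver=systemd"] }
    out/minikube-darwin-amd64 ssh -p old-k8s-version-720000 -- "sudo systemctl restart docker"

The Swap and Service-Kubelet warnings are already covered by the --ignore-preflight-errors list in the failing command, so the sketch only addresses the cgroup-driver suggestion.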
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-720000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-720000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-720000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c",
	        "Created": "2023-01-28T04:23:59.460794108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:29:37.268618275Z",
	            "FinishedAt": "2023-01-28T04:29:34.322441553Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hosts",
	        "LogPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c-json.log",
	        "Name": "/old-k8s-version-720000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-720000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-720000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-720000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-720000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-720000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4e73c310713a434f67e331eb1506dcd28ad63819145d6730622dc2dba50031a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55385"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55386"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55387"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55388"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e4e73c310713",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-720000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a7d076a4985",
	                        "old-k8s-version-720000"
	                    ],
	                    "NetworkID": "4a101da36ff964d86adf1945f3a9a22581d700864206dd1558c9c4957ae7df32",
	                    "EndpointID": "aeeb4aec150b1b8c52d1657c0698d8a1137e548645886cdfac6e0cdeda3a86c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
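Editor's note: for orientation, the long docker inspect dump above can be narrowed to the fields that matter for this failure with Go-template formatting; a small sketch using values visible in the dump (the container is "running", and the API-server port 8443 is published on 127.0.0.1:55388):

    # Illustrative queries against the same container; --format/-f takes a Go template.
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-720000
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' old-k8s-version-720000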
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (722.754889ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-720000 logs -n 25
E0127 20:37:51.234957    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-720000 logs -n 25: (3.783718928s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-259000 sudo                            | kubenet-259000         | jenkins | v1.28.0 | 27 Jan 23 20:25 PST | 27 Jan 23 20:25 PST |
	|         | containerd config dump                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-259000 sudo                            | kubenet-259000         | jenkins | v1.28.0 | 27 Jan 23 20:25 PST |                     |
	|         | systemctl status crio --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-259000 sudo                            | kubenet-259000         | jenkins | v1.28.0 | 27 Jan 23 20:25 PST | 27 Jan 23 20:25 PST |
	|         | systemctl cat crio --no-pager                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-259000 sudo find                       | kubenet-259000         | jenkins | v1.28.0 | 27 Jan 23 20:25 PST | 27 Jan 23 20:25 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-259000 sudo crio                       | kubenet-259000         | jenkins | v1.28.0 | 27 Jan 23 20:25 PST | 27 Jan 23 20:25 PST |
	|         | config                                            |                        |         |         |                     |                     |
	| delete  | -p kubenet-259000                                 | kubenet-259000         | jenkins | v1.28.0 | 27 Jan 23 20:25 PST | 27 Jan 23 20:25 PST |
	| start   | -p no-preload-711000                              | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:25 PST | 27 Jan 23 20:26 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-711000        | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:26 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p no-preload-711000                              | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:26 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-711000             | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:26 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-711000                              | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:36 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr                                 |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720000   | old-k8s-version-720000 | jenkins | v1.28.0 | 27 Jan 23 20:28 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-720000                         | old-k8s-version-720000 | jenkins | v1.28.0 | 27 Jan 23 20:29 PST | 27 Jan 23 20:29 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720000        | old-k8s-version-720000 | jenkins | v1.28.0 | 27 Jan 23 20:29 PST | 27 Jan 23 20:29 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-720000                         | old-k8s-version-720000 | jenkins | v1.28.0 | 27 Jan 23 20:29 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --kvm-network=default                             |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |         |                     |                     |
	|         | --keep-context=false                              |                        |         |         |                     |                     |
	|         | --driver=docker                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |         |                     |                     |
	| ssh     | -p no-preload-711000 sudo                         | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	|         | crictl images -o json                             |                        |         |         |                     |                     |
	| pause   | -p no-preload-711000                              | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| unpause | -p no-preload-711000                              | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	|         | --alsologtostderr -v=1                            |                        |         |         |                     |                     |
	| delete  | -p no-preload-711000                              | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	| delete  | -p no-preload-711000                              | no-preload-711000      | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	| start   | -p embed-certs-216000                             | embed-certs-216000     | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:37 PST |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-216000       | embed-certs-216000     | jenkins | v1.28.0 | 27 Jan 23 20:37 PST | 27 Jan 23 20:37 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |         |                     |                     |
	| stop    | -p embed-certs-216000                             | embed-certs-216000     | jenkins | v1.28.0 | 27 Jan 23 20:37 PST | 27 Jan 23 20:37 PST |
	|         | --alsologtostderr -v=3                            |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-216000            | embed-certs-216000     | jenkins | v1.28.0 | 27 Jan 23 20:37 PST | 27 Jan 23 20:37 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-216000                             | embed-certs-216000     | jenkins | v1.28.0 | 27 Jan 23 20:37 PST |                     |
	|         | --memory=2200                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 20:37:49
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 20:37:49.522146   23956 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:37:49.522315   23956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:37:49.522325   23956 out.go:309] Setting ErrFile to fd 2...
	I0127 20:37:49.522338   23956 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:37:49.522453   23956 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 20:37:49.522966   23956 out.go:303] Setting JSON to false
	I0127 20:37:49.543813   23956 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":5843,"bootTime":1674874826,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 20:37:49.543905   23956 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:37:49.565280   23956 out.go:177] * [embed-certs-216000] minikube v1.28.0 on Darwin 13.2
	I0127 20:37:49.607091   23956 notify.go:220] Checking for updates...
	I0127 20:37:49.627780   23956 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:37:49.670009   23956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:37:49.711972   23956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:37:49.753861   23956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:37:49.795849   23956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 20:37:49.837967   23956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:37:49.861577   23956 config.go:180] Loaded profile config "embed-certs-216000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:37:49.862319   23956 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:37:49.930867   23956 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 20:37:49.931012   23956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:37:50.095327   23956 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:37:49.985143905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:37:50.116869   23956 out.go:177] * Using the docker driver based on existing profile
	I0127 20:37:50.137819   23956 start.go:296] selected driver: docker
	I0127 20:37:50.137831   23956 start.go:840] validating driver "docker" against &{Name:embed-certs-216000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-216000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:37:50.137930   23956 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:37:50.140563   23956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:37:50.347008   23956 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:37:50.198102637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:37:50.347904   23956 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:37:50.347959   23956 cni.go:84] Creating CNI manager for ""
	I0127 20:37:50.347991   23956 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:37:50.348015   23956 start_flags.go:319] config:
	{Name:embed-certs-216000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-216000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:37:50.396893   23956 out.go:177] * Starting control plane node embed-certs-216000 in cluster embed-certs-216000
	I0127 20:37:50.439184   23956 cache.go:120] Beginning downloading kic base image for docker with docker
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 04:29:37 UTC, end at Sat 2023-01-28 04:37:51 UTC. --
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.390742681Z" level=info msg="Processing signal 'terminated'"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.391689237Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.391849511Z" level=info msg="Daemon shutdown complete"
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: docker.service: Succeeded.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.440920078Z" level=info msg="Starting up"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442625226Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442662033Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442682069Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442698479Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444251405Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444290828Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444304767Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444311848Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.450641942Z" level=info msg="Loading containers: start."
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.531616300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.563882138Z" level=info msg="Loading containers: done."
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.572136805Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.572200080Z" level=info msg="Daemon has completed initialization"
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.593651432Z" level=info msg="API listen on [::]:2376"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.600920617Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-28T04:37:53Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  04:37:53 up  1:37,  0 users,  load average: 1.01, 0.91, 1.24
	Linux old-k8s-version-720000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 04:29:37 UTC, end at Sat 2023-01-28 04:37:53 UTC. --
	Jan 28 04:37:52 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 04:37:52 old-k8s-version-720000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jan 28 04:37:52 old-k8s-version-720000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 04:37:52 old-k8s-version-720000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14820]: I0128 04:37:53.060980   14820 server.go:410] Version: v1.16.0
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14820]: I0128 04:37:53.061330   14820 plugins.go:100] No cloud provider specified.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14820]: I0128 04:37:53.061368   14820 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14820]: I0128 04:37:53.063161   14820 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14820]: W0128 04:37:53.063917   14820 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14820]: W0128 04:37:53.063991   14820 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14820]: F0128 04:37:53.064016   14820 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 04:37:53 old-k8s-version-720000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 04:37:53 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 04:37:53 old-k8s-version-720000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 163.
	Jan 28 04:37:53 old-k8s-version-720000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 04:37:53 old-k8s-version-720000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14857]: I0128 04:37:53.798015   14857 server.go:410] Version: v1.16.0
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14857]: I0128 04:37:53.798205   14857 plugins.go:100] No cloud provider specified.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14857]: I0128 04:37:53.798216   14857 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14857]: I0128 04:37:53.800153   14857 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14857]: W0128 04:37:53.800923   14857 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14857]: W0128 04:37:53.800999   14857 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 04:37:53 old-k8s-version-720000 kubelet[14857]: F0128 04:37:53.801024   14857 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 04:37:53 old-k8s-version-720000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 04:37:53 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
-- /stdout --
** stderr ** 
	E0127 20:37:53.546290   23981 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (444.606902ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-720000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (498.52s)
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.54s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:38:09.157782    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:38:09.211950    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:38:25.379162    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:38:44.729798    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:39:14.280155    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:39:38.900364    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:39:39.591527    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:39:50.112161    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:39:50.399824    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:40:56.629417    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:41:13.156885    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:41:13.453260    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:41:17.751686    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:17.756744    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:17.767414    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:17.787732    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:17.828245    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:17.910205    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:18.070354    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:18.390632    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:19.032865    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:20.345121    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:41:22.907107    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:28.027500    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:41:28.876101    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:41:30.206074    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:41:38.267814    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:41:58.748080    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:42:39.708376    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
E0127 20:42:42.633610    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:42:51.237616    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:42:51.988321    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:42:53.259976    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:43:09.159973    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:43:09.211433    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:43:25.380291    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:43:44.731466    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:44:01.629378    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:44:32.211636    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:44:32.257823    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
E0127 20:44:38.900468    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:44:39.592149    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:44:48.439326    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 20:44:50.112873    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:44:50.402030    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:45:56.634163    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:46:01.995128    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:46:17.762070    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:46:28.884997    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:46:30.215834    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:46:45.479284    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (522.273928ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-720000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-720000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-720000:
-- stdout --
	[
	    {
	        "Id": "7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c",
	        "Created": "2023-01-28T04:23:59.460794108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:29:37.268618275Z",
	            "FinishedAt": "2023-01-28T04:29:34.322441553Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hosts",
	        "LogPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c-json.log",
	        "Name": "/old-k8s-version-720000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-720000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-720000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3be8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/docker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f33450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-720000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-720000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-720000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4e73c310713a434f67e331eb1506dcd28ad63819145d6730622dc2dba50031a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55385"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55386"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55387"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55388"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e4e73c310713",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-720000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a7d076a4985",
	                        "old-k8s-version-720000"
	                    ],
	                    "NetworkID": "4a101da36ff964d86adf1945f3a9a22581d700864206dd1558c9c4957ae7df32",
	                    "EndpointID": "aeeb4aec150b1b8c52d1657c0698d8a1137e548645886cdfac6e0cdeda3a86c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (445.197067ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-720000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-720000 logs -n 25: (3.838109737s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-711000        | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:26 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-711000                              | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:26 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-711000             | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:26 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-711000                              | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:26 PST | 27 Jan 23 20:36 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-720000   | old-k8s-version-720000       | jenkins | v1.28.0 | 27 Jan 23 20:28 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-720000                         | old-k8s-version-720000       | jenkins | v1.28.0 | 27 Jan 23 20:29 PST | 27 Jan 23 20:29 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-720000        | old-k8s-version-720000       | jenkins | v1.28.0 | 27 Jan 23 20:29 PST | 27 Jan 23 20:29 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-720000                         | old-k8s-version-720000       | jenkins | v1.28.0 | 27 Jan 23 20:29 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-711000 sudo                         | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-711000                              | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-711000                              | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-711000                              | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	| delete  | -p no-preload-711000                              | no-preload-711000            | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:36 PST |
	| start   | -p embed-certs-216000                             | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:36 PST | 27 Jan 23 20:37 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-216000       | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:37 PST | 27 Jan 23 20:37 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-216000                             | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:37 PST | 27 Jan 23 20:37 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-216000            | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:37 PST | 27 Jan 23 20:37 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-216000                             | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:37 PST | 27 Jan 23 20:47 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-216000 sudo                        | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-216000                             | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-216000                             | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-216000                             | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	| delete  | -p embed-certs-216000                             | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	| delete  | -p                                                | disable-driver-mounts-412000 | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	|         | disable-driver-mounts-412000                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:47 PST |                     |
	|         | default-k8s-diff-port-500000                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 20:47:23
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 20:47:23.859627   24816 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:47:23.859786   24816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:47:23.859792   24816 out.go:309] Setting ErrFile to fd 2...
	I0127 20:47:23.859796   24816 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:47:23.859895   24816 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 20:47:23.860430   24816 out.go:303] Setting JSON to false
	I0127 20:47:23.879206   24816 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6417,"bootTime":1674874826,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 20:47:23.879295   24816 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:47:23.901114   24816 out.go:177] * [default-k8s-diff-port-500000] minikube v1.28.0 on Darwin 13.2
	I0127 20:47:23.922958   24816 notify.go:220] Checking for updates...
	I0127 20:47:23.944763   24816 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:47:23.966583   24816 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:47:23.987918   24816 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:47:24.009988   24816 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:47:24.031593   24816 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 20:47:24.053945   24816 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:47:24.076432   24816 config.go:180] Loaded profile config "old-k8s-version-720000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:47:24.076514   24816 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:47:24.140914   24816 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 20:47:24.141057   24816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:47:24.287092   24816 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:47:24.192489352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:47:24.309295   24816 out.go:177] * Using the docker driver based on user configuration
	I0127 20:47:24.330975   24816 start.go:296] selected driver: docker
	I0127 20:47:24.331004   24816 start.go:840] validating driver "docker" against <nil>
	I0127 20:47:24.331020   24816 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:47:24.334937   24816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:47:24.481749   24816 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:47:24.38817196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:47:24.481850   24816 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 20:47:24.482008   24816 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:47:24.503590   24816 out.go:177] * Using Docker Desktop driver with root privileges
	I0127 20:47:24.524471   24816 cni.go:84] Creating CNI manager for ""
	I0127 20:47:24.524494   24816 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:47:24.524501   24816 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 20:47:24.524522   24816 start_flags.go:319] config:
	{Name:default-k8s-diff-port-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:47:24.566258   24816 out.go:177] * Starting control plane node default-k8s-diff-port-500000 in cluster default-k8s-diff-port-500000
	I0127 20:47:24.587430   24816 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 20:47:24.608489   24816 out.go:177] * Pulling base image ...
	I0127 20:47:24.650819   24816 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:47:24.650874   24816 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 20:47:24.650923   24816 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0127 20:47:24.650945   24816 cache.go:57] Caching tarball of preloaded images
	I0127 20:47:24.651271   24816 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:47:24.651297   24816 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0127 20:47:24.652524   24816 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/default-k8s-diff-port-500000/config.json ...
	I0127 20:47:24.652695   24816 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/default-k8s-diff-port-500000/config.json: {Name:mkd8cee855927e6757ca8b6cfd9aa4dbb7872579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:47:24.716560   24816 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0127 20:47:24.716575   24816 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0127 20:47:24.716645   24816 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:47:24.716723   24816 start.go:364] acquiring machines lock for default-k8s-diff-port-500000: {Name:mk7188e1f5f79e7ddd5899e474a537b7fbc7f203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:47:24.716895   24816 start.go:368] acquired machines lock for "default-k8s-diff-port-500000" in 160.557µs
	I0127 20:47:24.716922   24816 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-500000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:47:24.716991   24816 start.go:125] createHost starting for "" (driver="docker")
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 04:29:37 UTC, end at Sat 2023-01-28 04:47:26 UTC. --
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.390742681Z" level=info msg="Processing signal 'terminated'"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.391689237Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.391849511Z" level=info msg="Daemon shutdown complete"
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: docker.service: Succeeded.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.440920078Z" level=info msg="Starting up"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442625226Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442662033Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442682069Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442698479Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444251405Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444290828Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444304767Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444311848Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.450641942Z" level=info msg="Loading containers: start."
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.531616300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.563882138Z" level=info msg="Loading containers: done."
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.572136805Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.572200080Z" level=info msg="Daemon has completed initialization"
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.593651432Z" level=info msg="API listen on [::]:2376"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.600920617Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-28T04:47:28Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  04:47:29 up  1:46,  0 users,  load average: 0.42, 0.44, 0.84
	Linux old-k8s-version-720000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 04:29:37 UTC, end at Sat 2023-01-28 04:47:29 UTC. --
	Jan 28 04:47:27 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 04:47:28 old-k8s-version-720000 kubelet[25067]: I0128 04:47:28.343887   25067 server.go:410] Version: v1.16.0
	Jan 28 04:47:28 old-k8s-version-720000 kubelet[25067]: I0128 04:47:28.344201   25067 plugins.go:100] No cloud provider specified.
	Jan 28 04:47:28 old-k8s-version-720000 kubelet[25067]: I0128 04:47:28.344214   25067 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 04:47:28 old-k8s-version-720000 kubelet[25067]: I0128 04:47:28.346192   25067 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 04:47:28 old-k8s-version-720000 kubelet[25067]: W0128 04:47:28.346966   25067 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 04:47:28 old-k8s-version-720000 kubelet[25067]: W0128 04:47:28.347043   25067 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 04:47:28 old-k8s-version-720000 kubelet[25067]: F0128 04:47:28.347071   25067 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 930.
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 04:47:28 old-k8s-version-720000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 04:47:29 old-k8s-version-720000 kubelet[25099]: I0128 04:47:29.115388   25099 server.go:410] Version: v1.16.0
	Jan 28 04:47:29 old-k8s-version-720000 kubelet[25099]: I0128 04:47:29.115679   25099 plugins.go:100] No cloud provider specified.
	Jan 28 04:47:29 old-k8s-version-720000 kubelet[25099]: I0128 04:47:29.115694   25099 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 04:47:29 old-k8s-version-720000 kubelet[25099]: I0128 04:47:29.118085   25099 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 04:47:29 old-k8s-version-720000 kubelet[25099]: W0128 04:47:29.119157   25099 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 04:47:29 old-k8s-version-720000 kubelet[25099]: W0128 04:47:29.119247   25099 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 04:47:29 old-k8s-version-720000 kubelet[25099]: F0128 04:47:29.119337   25099 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 04:47:29 old-k8s-version-720000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 04:47:29 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:47:28.933439   24881 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (567.987645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-720000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:47:51.246839    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:48:09.169367    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:48:09.221908    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:48:25.390711    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:49:38.909813    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
E0127 20:49:39.602534    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:49:50.124769    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:49:50.410450    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:50:07.799522    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:50:56.639436    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:51:17.764363    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:51:28.886512    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:51:30.218851    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:52:51.247668    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:53:09.170055    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/enable-default-cni-259000/client.crt: no such file or directory
E0127 20:53:09.222944    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/bridge-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0127 20:53:25.389873    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:55388/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:53:59.682357    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:54:38.910782    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kubenet-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:54:39.603727    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:54:50.125444    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:54:50.412828    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:55:54.295415    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:55:56.640476    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:56:17.766063    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/no-preload-711000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:56:28.886881    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0127 20:56:30.219759    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (410.973141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-720000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-720000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-720000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.909µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-720000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-720000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-720000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c",
	        "Created": "2023-01-28T04:23:59.460794108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304880,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T04:29:37.268618275Z",
	            "FinishedAt": "2023-01-28T04:29:34.322441553Z"
	        },
	        "Image": "sha256:c4f6061730f518104bba7f63d4b9eb2ccd1634c6b2943801ca33b3f1c3908566",
	        "ResolvConfPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/hosts",
	        "LogPath": "/var/lib/docker/containers/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c/7a7d076a498516c111ce76cf45095cad595fe9bdc6a8bcc5deafc4bf3ccd225c-json.log",
	        "Name": "/old-k8s-version-720000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-720000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-720000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb-init/diff:/var/lib/docker/overlay2/c98618a945a30d9da49b77c20d284b1fc9d5dd07c718be403064c7b12592fcc2/diff:/var/lib/docker/overlay2/acd2ad577a4ceef715a354a1b9ea7e57ed745eb557fea5ca8ee3cd1d85439275/diff:/var/lib/docker/overlay2/bfd2a98291f2fc5a30237c375509cfde5e7166ba0a8ae30e3ccd369fe3404b2e/diff:/var/lib/docker/overlay2/45332007b433d2510247edff31bc8b0d2e21c20238be950857d76066aaec8480/diff:/var/lib/docker/overlay2/4b42718e588e48c6a44dd97f98bb830d297eb8995ed59933f921307f1da2803f/diff:/var/lib/docker/overlay2/e72c33bb852ee68875a33b7bec813305a6b91f8b16ae32db22762cf43402323b/diff:/var/lib/docker/overlay2/8a99955944f9a0b68c5f113e61b6f6bc01bb3fd7f9c4a20ea12f00a88a33a1d4/diff:/var/lib/docker/overlay2/e0b0e841059ef79e6129bad0f0d8e18a1336a52c5467f7a05ca2794e8efcce2d/diff:/var/lib/docker/overlay2/a3fbb33b25e86980b42b0b45685f47a46023b703857d79cbb4c4d672ce639e39/diff:/var/lib/docker/overlay2/2dbe3b
e8eb01629a936e78c682f26882b187944fe5d24c049195654e490c802a/diff:/var/lib/docker/overlay2/c504395aedc09b4cd13feebc2043d4d0bcfab1b35c130806b4e9520c179b0231/diff:/var/lib/docker/overlay2/f333ac1dcf89b80f616501fd62797fbd7f8ecfb83f5fef081c7bb51ae911625d/diff:/var/lib/docker/overlay2/fb5c9b21669e5a9b084584933ae954fc9493d2e96daa25d19d7279da8cc2f52b/diff:/var/lib/docker/overlay2/af90405e66f7ffa61f79803e02798331195ec7594578c593fce0df6bfb9ba86c/diff:/var/lib/docker/overlay2/3c83186f707e3de251f810e96b25d5ab03a565e3d763f2605b2a762589e1e340/diff:/var/lib/docker/overlay2/37e178ca91bc815e59b4d08c255c2f134b1c800819cbe12cb2afa0e87379624c/diff:/var/lib/docker/overlay2/799d4146ec7c90cfddfab6c2610abdc1c7d41ee4bec84be82f7c9df0485d6390/diff:/var/lib/docker/overlay2/01936bf347c896d2075792750c427d32d5515aefdc4c8be60a70dd7a7c624e88/diff:/var/lib/docker/overlay2/58fd101e232f75bbf4159575ebc8bae8f27dbd7cb72659aa4d4d35385bbb3536/diff:/var/lib/docker/overlay2/eaadede4d4519ffc32dfe786221881f7d39ac8d5b7b9323f56508a90a0c52b29/diff:/var/lib/d
ocker/overlay2/0e2fed7ab7b98f63c8a787aa64d282e8001afa68ce1ce45be62168b53cd630c8/diff:/var/lib/docker/overlay2/f07d5613ff9c68f1a33650faf6224c6c0144b576c512a1211ec55360997eef5c/diff:/var/lib/docker/overlay2/254e8c42a01d4006c729fd67c19479b78041ca3abaa9f5c30b8a96e728a23732/diff:/var/lib/docker/overlay2/16eeb409b96071e187db369c3e8977b6807e5000a9b65c39d22530888a6f50b3/diff:/var/lib/docker/overlay2/32434435c4ce07daf39b43c678342ae7f62769a08740307e23f9e2c816b52714/diff:/var/lib/docker/overlay2/b507767acd4ce2a505273a8d30a25a000e198a7fe2321d1e75619467f87c982e/diff:/var/lib/docker/overlay2/89eb528b30472cbbf69cfd5c04fd59958f4bcf1106a7246c576b37103c1c29ea/diff:/var/lib/docker/overlay2/2fe626935915dbcc5d89b91e7aedb7e415c8c5f60a447d3bf29da7153c2e2d51/diff:/var/lib/docker/overlay2/12e2e6c023d453521828bd672af514cfbfd23ed029fa49ad76bf06789bac9d82/diff:/var/lib/docker/overlay2/10893bc4db033fb9504bdfc0ce61a991a48be0ba3ce06487da02434390b992d6/diff:/var/lib/docker/overlay2/557d846a56175ff15f5fafe1a4e7488be2955f8362bb2bdfe69f36464f3
3450d/diff:/var/lib/docker/overlay2/037768a4494ebb110f1c274f3a38f986eb8131aa1059266fe2da896b01b49739/diff:/var/lib/docker/overlay2/d659cca8a2d2085353fce997d8c419c9c181ce1ea97f9a8e905c3f9529966fc1/diff:/var/lib/docker/overlay2/9d6fbc388597a7a6d8f4f89812b20cc2dca57eba35dfd4c86723cf513c5bc37d/diff:/var/lib/docker/overlay2/1fb8a6e1e3555d3f1437c69ded87ac2ef056b8a5ec422146c07c694478c4b005/diff:/var/lib/docker/overlay2/fb0364b23eadc6eeadc7f5bf8ef08c906adcd94c9b2b1725e6e2352f4c9dcf50/diff:/var/lib/docker/overlay2/b4535ed62cf27bc04fe79b87d2d35f5d0151c3d95343f6cacc95a945de87c736/diff:/var/lib/docker/overlay2/07c066adfccd26b1b3982b81b6d662d47058772375f0b3623a4644d5fa9dacbb/diff:/var/lib/docker/overlay2/17fde45fbe3450cac98412542274d7b0906726ad3228a23912e31a0cca96a610/diff:/var/lib/docker/overlay2/9f923d8bd4daeab1de35589fa5d37738ce7f9b42d2e37d6cbb9a37058aeb63ec/diff:/var/lib/docker/overlay2/4cf5d2f7a3bfbed0d8f8632fce96b6b105c27eae1b84e7afb03e51f1325654b0/diff:/var/lib/docker/overlay2/2fc58532ce127557e21e34263872706f550748
939bbe53ba13cc9c6f8db039fd/diff:/var/lib/docker/overlay2/cfde536f5c21d7e98d79b854c716cdf5fad89d16d96526334ff303d0382952bc/diff:/var/lib/docker/overlay2/7ea9a21ee484f34b47c36a3279f32faadb0cb1fe47024a0db2169fba9890c080/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da1298bdc7c690d976cddf11ec06c53f3c0498e2fa7dca8218cb9dd123e574fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-720000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-720000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-720000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-720000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e4e73c310713a434f67e331eb1506dcd28ad63819145d6730622dc2dba50031a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55385"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55386"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55387"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55388"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e4e73c310713",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-720000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7a7d076a4985",
	                        "old-k8s-version-720000"
	                    ],
	                    "NetworkID": "4a101da36ff964d86adf1945f3a9a22581d700864206dd1558c9c4957ae7df32",
	                    "EndpointID": "aeeb4aec150b1b8c52d1657c0698d8a1137e548645886cdfac6e0cdeda3a86c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (413.45953ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-720000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-720000 logs -n 25: (3.455931773s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-216000                                | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p embed-certs-216000                                | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-216000                                | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	| delete  | -p embed-certs-216000                                | embed-certs-216000           | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	| delete  | -p                                                   | disable-driver-mounts-412000 | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:47 PST |
	|         | disable-driver-mounts-412000                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:47 PST | 27 Jan 23 20:48 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:48 PST | 27 Jan 23 20:48 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:48 PST | 27 Jan 23 20:48 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-500000     | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:48 PST | 27 Jan 23 20:48 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:48 PST | 27 Jan 23 20:53 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |         |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:54 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |         |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:54 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:54 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:54 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-500000 | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:54 PST |
	|         | default-k8s-diff-port-500000                         |                              |         |         |                     |                     |
	| start   | -p newest-cni-686000 --memory=2200 --alsologtostderr | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:54 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-686000           | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:54 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |         |                     |                     |
	| stop    | -p newest-cni-686000                                 | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:54 PST | 27 Jan 23 20:55 PST |
	|         | --alsologtostderr -v=3                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-686000                | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:55 PST | 27 Jan 23 20:55 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |         |                     |                     |
	| start   | -p newest-cni-686000 --memory=2200 --alsologtostderr | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:55 PST | 27 Jan 23 20:55 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-686000 sudo                            | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:55 PST | 27 Jan 23 20:55 PST |
	|         | crictl images -o json                                |                              |         |         |                     |                     |
	| pause   | -p newest-cni-686000                                 | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:55 PST | 27 Jan 23 20:55 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| unpause | -p newest-cni-686000                                 | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:55 PST | 27 Jan 23 20:55 PST |
	|         | --alsologtostderr -v=1                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-686000                                 | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:55 PST | 27 Jan 23 20:55 PST |
	| delete  | -p newest-cni-686000                                 | newest-cni-686000            | jenkins | v1.28.0 | 27 Jan 23 20:55 PST | 27 Jan 23 20:55 PST |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 20:55:07
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 20:55:07.823225   25888 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:55:07.823399   25888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:55:07.823404   25888 out.go:309] Setting ErrFile to fd 2...
	I0127 20:55:07.823408   25888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:55:07.823516   25888 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 20:55:07.824007   25888 out.go:303] Setting JSON to false
	I0127 20:55:07.842796   25888 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6881,"bootTime":1674874826,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 20:55:07.842894   25888 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:55:07.864887   25888 out.go:177] * [newest-cni-686000] minikube v1.28.0 on Darwin 13.2
	I0127 20:55:07.907400   25888 notify.go:220] Checking for updates...
	I0127 20:55:07.929416   25888 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:55:07.951248   25888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:55:07.993209   25888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:55:08.014158   25888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:55:08.035407   25888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 20:55:08.056432   25888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:55:08.078847   25888 config.go:180] Loaded profile config "newest-cni-686000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:55:08.079515   25888 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:55:08.140675   25888 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 20:55:08.140801   25888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:55:08.293430   25888 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:55:08.192743662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:55:08.336183   25888 out.go:177] * Using the docker driver based on existing profile
	I0127 20:55:08.357067   25888 start.go:296] selected driver: docker
	I0127 20:55:08.357105   25888 start.go:840] validating driver "docker" against &{Name:newest-cni-686000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-686000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: S
ubnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:55:08.357279   25888 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:55:08.361113   25888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 20:55:08.508186   25888 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 04:55:08.413864468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 20:55:08.508340   25888 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0127 20:55:08.508362   25888 cni.go:84] Creating CNI manager for ""
	I0127 20:55:08.508373   25888 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:55:08.508387   25888 start_flags.go:319] config:
	{Name:newest-cni-686000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-686000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:55:08.530652   25888 out.go:177] * Starting control plane node newest-cni-686000 in cluster newest-cni-686000
	I0127 20:55:08.552448   25888 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 20:55:08.574052   25888 out.go:177] * Pulling base image ...
	I0127 20:55:08.616497   25888 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:55:08.616497   25888 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 20:55:08.616603   25888 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0127 20:55:08.616623   25888 cache.go:57] Caching tarball of preloaded images
	I0127 20:55:08.616827   25888 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:55:08.616847   25888 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0127 20:55:08.617878   25888 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/config.json ...
	I0127 20:55:08.673824   25888 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon, skipping pull
	I0127 20:55:08.673841   25888 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a exists in daemon, skipping load
	I0127 20:55:08.673866   25888 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:55:08.673901   25888 start.go:364] acquiring machines lock for newest-cni-686000: {Name:mkb2f4d50e074e7c5cd1879328c6f545d50bb005 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:55:08.673975   25888 start.go:368] acquired machines lock for "newest-cni-686000" in 57.566µs
	I0127 20:55:08.674000   25888 start.go:96] Skipping create...Using existing machine configuration
	I0127 20:55:08.674011   25888 fix.go:55] fixHost starting: 
	I0127 20:55:08.674238   25888 cli_runner.go:164] Run: docker container inspect newest-cni-686000 --format={{.State.Status}}
	I0127 20:55:08.732886   25888 fix.go:103] recreateIfNeeded on newest-cni-686000: state=Stopped err=<nil>
	W0127 20:55:08.732917   25888 fix.go:129] unexpected machine state, will restart: <nil>
	I0127 20:55:08.755001   25888 out.go:177] * Restarting existing docker container for "newest-cni-686000" ...
	I0127 20:55:08.776787   25888 cli_runner.go:164] Run: docker start newest-cni-686000
	I0127 20:55:09.119906   25888 cli_runner.go:164] Run: docker container inspect newest-cni-686000 --format={{.State.Status}}
	I0127 20:55:09.182608   25888 kic.go:426] container "newest-cni-686000" state is running.
	I0127 20:55:09.183208   25888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-686000
	I0127 20:55:09.246050   25888 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/config.json ...
	I0127 20:55:09.246550   25888 machine.go:88] provisioning docker machine ...
	I0127 20:55:09.246577   25888 ubuntu.go:169] provisioning hostname "newest-cni-686000"
	I0127 20:55:09.246679   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:09.320480   25888 main.go:141] libmachine: Using SSH client type: native
	I0127 20:55:09.321261   25888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56885 <nil> <nil>}
	I0127 20:55:09.321278   25888 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-686000 && echo "newest-cni-686000" | sudo tee /etc/hostname
	I0127 20:55:09.478099   25888 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-686000
	
	I0127 20:55:09.478191   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:09.540563   25888 main.go:141] libmachine: Using SSH client type: native
	I0127 20:55:09.540729   25888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56885 <nil> <nil>}
	I0127 20:55:09.540742   25888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-686000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-686000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-686000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:55:09.676985   25888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:55:09.677009   25888 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3092/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3092/.minikube}
	I0127 20:55:09.677029   25888 ubuntu.go:177] setting up certificates
	I0127 20:55:09.677038   25888 provision.go:83] configureAuth start
	I0127 20:55:09.677112   25888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-686000
	I0127 20:55:09.739123   25888 provision.go:138] copyHostCerts
	I0127 20:55:09.739226   25888 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem, removing ...
	I0127 20:55:09.739235   25888 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem
	I0127 20:55:09.739331   25888 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.pem (1078 bytes)
	I0127 20:55:09.739547   25888 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem, removing ...
	I0127 20:55:09.739555   25888 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem
	I0127 20:55:09.739615   25888 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/cert.pem (1123 bytes)
	I0127 20:55:09.739770   25888 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem, removing ...
	I0127 20:55:09.739778   25888 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem
	I0127 20:55:09.739838   25888 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3092/.minikube/key.pem (1679 bytes)
	I0127 20:55:09.739973   25888 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem org=jenkins.newest-cni-686000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-686000]
	I0127 20:55:09.957173   25888 provision.go:172] copyRemoteCerts
	I0127 20:55:09.957241   25888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:55:09.957293   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:10.017631   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:10.111336   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 20:55:10.128909   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0127 20:55:10.147119   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 20:55:10.164887   25888 provision.go:86] duration metric: configureAuth took 487.834684ms
	I0127 20:55:10.164901   25888 ubuntu.go:193] setting minikube options for container-runtime
	I0127 20:55:10.165065   25888 config.go:180] Loaded profile config "newest-cni-686000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:55:10.165131   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:10.226563   25888 main.go:141] libmachine: Using SSH client type: native
	I0127 20:55:10.226726   25888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56885 <nil> <nil>}
	I0127 20:55:10.226735   25888 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:55:10.366939   25888 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0127 20:55:10.366956   25888 ubuntu.go:71] root file system type: overlay
	I0127 20:55:10.367146   25888 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:55:10.367233   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:10.428247   25888 main.go:141] libmachine: Using SSH client type: native
	I0127 20:55:10.428430   25888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56885 <nil> <nil>}
	I0127 20:55:10.428480   25888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:55:10.573767   25888 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:55:10.573876   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:10.634348   25888 main.go:141] libmachine: Using SSH client type: native
	I0127 20:55:10.634504   25888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 56885 <nil> <nil>}
	I0127 20:55:10.634519   25888 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:55:10.774984   25888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:55:10.774998   25888 machine.go:91] provisioned docker machine in 1.528433173s
	I0127 20:55:10.775008   25888 start.go:300] post-start starting for "newest-cni-686000" (driver="docker")
	I0127 20:55:10.775019   25888 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:55:10.775094   25888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:55:10.775152   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:10.834418   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:10.930130   25888 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:55:10.933769   25888 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0127 20:55:10.933790   25888 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0127 20:55:10.933800   25888 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0127 20:55:10.933804   25888 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0127 20:55:10.933814   25888 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/addons for local assets ...
	I0127 20:55:10.933908   25888 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3092/.minikube/files for local assets ...
	I0127 20:55:10.934077   25888 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem -> 44062.pem in /etc/ssl/certs
	I0127 20:55:10.934255   25888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:55:10.941547   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:55:10.959332   25888 start.go:303] post-start completed in 184.30997ms
	I0127 20:55:10.959410   25888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 20:55:10.959468   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:11.018640   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:11.110912   25888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0127 20:55:11.115594   25888 fix.go:57] fixHost completed within 2.441570956s
	I0127 20:55:11.115608   25888 start.go:83] releasing machines lock for "newest-cni-686000", held for 2.441615825s
	I0127 20:55:11.115706   25888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-686000
	I0127 20:55:11.176109   25888 ssh_runner.go:195] Run: cat /version.json
	I0127 20:55:11.176134   25888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 20:55:11.176176   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:11.176203   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:11.240130   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:11.240280   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:11.391215   25888 ssh_runner.go:195] Run: systemctl --version
	I0127 20:55:11.396139   25888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 20:55:11.401401   25888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0127 20:55:11.418027   25888 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0127 20:55:11.418157   25888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 20:55:11.426251   25888 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0127 20:55:11.439544   25888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 20:55:11.448152   25888 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 20:55:11.448191   25888 start.go:472] detecting cgroup driver to use...
	I0127 20:55:11.448207   25888 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:55:11.448321   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:55:11.462087   25888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0127 20:55:11.470801   25888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:55:11.479412   25888 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:55:11.479476   25888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:55:11.488205   25888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:55:11.496687   25888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:55:11.505238   25888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:55:11.514016   25888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:55:11.522179   25888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 20:55:11.531127   25888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:55:11.538669   25888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:55:11.546108   25888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:55:11.613566   25888 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:55:11.689485   25888 start.go:472] detecting cgroup driver to use...
	I0127 20:55:11.689504   25888 detect.go:158] detected "cgroupfs" cgroup driver on host os
	I0127 20:55:11.689574   25888 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:55:11.701410   25888 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0127 20:55:11.701493   25888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:55:11.712199   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:55:11.727925   25888 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:55:11.812141   25888 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:55:11.922600   25888 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:55:11.922623   25888 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 20:55:11.937136   25888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:55:12.044056   25888 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:55:12.305753   25888 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 20:55:12.376107   25888 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 20:55:12.449358   25888 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 20:55:12.522455   25888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:55:12.591322   25888 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 20:55:12.608163   25888 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 20:55:12.608251   25888 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 20:55:12.612432   25888 start.go:540] Will wait 60s for crictl version
	I0127 20:55:12.612475   25888 ssh_runner.go:195] Run: which crictl
	I0127 20:55:12.616376   25888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 20:55:12.734882   25888 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.22
	RuntimeApiVersion:  v1alpha2
	I0127 20:55:12.734974   25888 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:55:12.764653   25888 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:55:12.842453   25888 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.22 ...
	I0127 20:55:12.842596   25888 cli_runner.go:164] Run: docker exec -t newest-cni-686000 dig +short host.docker.internal
	I0127 20:55:12.966799   25888 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0127 20:55:12.966911   25888 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0127 20:55:12.971390   25888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:55:12.981523   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:13.062680   25888 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0127 20:55:13.084515   25888 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:55:13.084674   25888 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:55:13.111176   25888 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 20:55:13.111196   25888 docker.go:560] Images already preloaded, skipping extraction
	I0127 20:55:13.111278   25888 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:55:13.136611   25888 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 20:55:13.136627   25888 cache_images.go:84] Images are preloaded, skipping loading
	I0127 20:55:13.136711   25888 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 20:55:13.208408   25888 cni.go:84] Creating CNI manager for ""
	I0127 20:55:13.208427   25888 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:55:13.208450   25888 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0127 20:55:13.208468   25888 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-686000 NodeName:newest-cni-686000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0127 20:55:13.208639   25888 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-686000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 20:55:13.208722   25888 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-686000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-686000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 20:55:13.208792   25888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0127 20:55:13.216990   25888 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 20:55:13.217055   25888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 20:55:13.224531   25888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0127 20:55:13.238715   25888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 20:55:13.252832   25888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0127 20:55:13.266202   25888 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0127 20:55:13.270251   25888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:55:13.280510   25888 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000 for IP: 192.168.67.2
	I0127 20:55:13.280546   25888 certs.go:186] acquiring lock for shared ca certs: {Name:mk2d86ad31f10478b3fe72eedd54ef2fcd74cf4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:55:13.280735   25888 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key
	I0127 20:55:13.280821   25888 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key
	I0127 20:55:13.280942   25888 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/client.key
	I0127 20:55:13.281049   25888 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/apiserver.key.c7fa3a9e
	I0127 20:55:13.281151   25888 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/proxy-client.key
	I0127 20:55:13.281400   25888 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem (1338 bytes)
	W0127 20:55:13.281437   25888 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406_empty.pem, impossibly tiny 0 bytes
	I0127 20:55:13.281447   25888 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 20:55:13.281485   25888 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/ca.pem (1078 bytes)
	I0127 20:55:13.281523   25888 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/cert.pem (1123 bytes)
	I0127 20:55:13.281556   25888 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/certs/key.pem (1679 bytes)
	I0127 20:55:13.281625   25888 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem (1708 bytes)
	I0127 20:55:13.282211   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 20:55:13.300601   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 20:55:13.318778   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 20:55:13.336787   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/newest-cni-686000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 20:55:13.354559   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 20:55:13.372217   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 20:55:13.390471   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 20:55:13.408744   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 20:55:13.427205   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 20:55:13.445910   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/certs/4406.pem --> /usr/share/ca-certificates/4406.pem (1338 bytes)
	I0127 20:55:13.464426   25888 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/ssl/certs/44062.pem --> /usr/share/ca-certificates/44062.pem (1708 bytes)
	I0127 20:55:13.483195   25888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 20:55:13.498386   25888 ssh_runner.go:195] Run: openssl version
	I0127 20:55:13.504507   25888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4406.pem && ln -fs /usr/share/ca-certificates/4406.pem /etc/ssl/certs/4406.pem"
	I0127 20:55:13.514485   25888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4406.pem
	I0127 20:55:13.518826   25888 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:36 /usr/share/ca-certificates/4406.pem
	I0127 20:55:13.518895   25888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4406.pem
	I0127 20:55:13.524969   25888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4406.pem /etc/ssl/certs/51391683.0"
	I0127 20:55:13.533803   25888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44062.pem && ln -fs /usr/share/ca-certificates/44062.pem /etc/ssl/certs/44062.pem"
	I0127 20:55:13.543128   25888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44062.pem
	I0127 20:55:13.547836   25888 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:36 /usr/share/ca-certificates/44062.pem
	I0127 20:55:13.547898   25888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44062.pem
	I0127 20:55:13.553916   25888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44062.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 20:55:13.561889   25888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 20:55:13.570931   25888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:55:13.576119   25888 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:31 /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:55:13.576200   25888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:55:13.582771   25888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 20:55:13.590838   25888 kubeadm.go:401] StartCluster: {Name:newest-cni-686000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-686000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:55:13.590959   25888 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:55:13.615920   25888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 20:55:13.624025   25888 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0127 20:55:13.624048   25888 kubeadm.go:633] restartCluster start
	I0127 20:55:13.624100   25888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 20:55:13.631322   25888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:13.631444   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:13.691902   25888 kubeconfig.go:135] verify returned: extract IP: "newest-cni-686000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:55:13.692069   25888 kubeconfig.go:146] "newest-cni-686000" context is missing from /Users/jenkins/minikube-integration/15565-3092/kubeconfig - will repair!
	I0127 20:55:13.692390   25888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/kubeconfig: {Name:mkdfca390fbcfbb59336162afe07d375994efabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:55:13.693755   25888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 20:55:13.701879   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:13.701943   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:13.711103   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:14.211254   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:14.211370   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:14.222233   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:14.713220   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:14.713383   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:14.724579   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:15.211680   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:15.211872   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:15.222967   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:15.713250   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:15.713527   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:15.724623   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:16.211423   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:16.211635   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:16.222515   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:16.711276   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:16.711412   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:16.722560   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:17.212615   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:17.212747   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:17.223864   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:17.712358   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:17.712592   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:17.724218   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:18.213260   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:18.213425   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:18.224792   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:18.711259   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:18.711343   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:18.721287   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:19.213262   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:19.213422   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:19.224807   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:19.711994   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:19.712209   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:19.723433   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:20.213267   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:20.213418   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:20.224471   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:20.712104   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:20.712215   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:20.723080   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:21.211227   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:21.211297   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:21.220843   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:21.711660   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:21.711776   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:21.722885   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:22.213308   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:22.213527   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:22.224512   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:22.712141   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:22.712322   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:22.723420   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:23.212371   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:23.212521   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:23.223676   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:23.711311   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:23.711481   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:23.721280   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:23.721291   25888 api_server.go:165] Checking apiserver status ...
	I0127 20:55:23.721339   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0127 20:55:23.729753   25888 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:23.729765   25888 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0127 20:55:23.729770   25888 kubeadm.go:1120] stopping kube-system containers ...
	I0127 20:55:23.729834   25888 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:55:23.754559   25888 docker.go:456] Stopping containers: [7526168e2cd8 2d28d026133b 4f03dd2c33c8 3c73a438bac5 dde1735157b3 e8f2376cd19e 91ef2cd6c2be 5ff3cdfe0145 80d918d3c752 f886af50c606 b6e9da592a29 f6ec8082f00b 3e7e9f3a99e4 10536732fa60 1dc2a4b940e4 faf742dab90e ddcd6963b8e0]
	I0127 20:55:23.754642   25888 ssh_runner.go:195] Run: docker stop 7526168e2cd8 2d28d026133b 4f03dd2c33c8 3c73a438bac5 dde1735157b3 e8f2376cd19e 91ef2cd6c2be 5ff3cdfe0145 80d918d3c752 f886af50c606 b6e9da592a29 f6ec8082f00b 3e7e9f3a99e4 10536732fa60 1dc2a4b940e4 faf742dab90e ddcd6963b8e0
	I0127 20:55:23.779198   25888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 20:55:23.790004   25888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:55:23.797884   25888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 28 04:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 28 04:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan 28 04:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 28 04:54 /etc/kubernetes/scheduler.conf
	
	I0127 20:55:23.797942   25888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 20:55:23.805708   25888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 20:55:23.813323   25888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 20:55:23.820665   25888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:23.820755   25888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 20:55:23.827902   25888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 20:55:23.835539   25888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0127 20:55:23.835591   25888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 20:55:23.842754   25888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 20:55:23.850362   25888 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0127 20:55:23.850378   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:55:23.907490   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:55:24.538424   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:55:24.676274   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:55:24.735353   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:55:24.870312   25888 api_server.go:51] waiting for apiserver process to appear ...
	I0127 20:55:24.870404   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:55:25.382488   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:55:25.882663   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:55:25.899105   25888 api_server.go:71] duration metric: took 1.028793854s to wait for apiserver process to appear ...
	I0127 20:55:25.899126   25888 api_server.go:87] waiting for apiserver healthz status ...
	I0127 20:55:25.899149   25888 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56889/healthz ...
	I0127 20:55:25.900603   25888 api_server.go:268] stopped: https://127.0.0.1:56889/healthz: Get "https://127.0.0.1:56889/healthz": EOF
	I0127 20:55:26.401410   25888 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56889/healthz ...
	I0127 20:55:28.674802   25888 api_server.go:278] https://127.0.0.1:56889/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 20:55:28.674824   25888 api_server.go:102] status: https://127.0.0.1:56889/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 20:55:28.901558   25888 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56889/healthz ...
	I0127 20:55:28.908204   25888 api_server.go:278] https://127.0.0.1:56889/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 20:55:28.908219   25888 api_server.go:102] status: https://127.0.0.1:56889/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 20:55:29.401609   25888 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56889/healthz ...
	I0127 20:55:29.406698   25888 api_server.go:278] https://127.0.0.1:56889/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 20:55:29.406712   25888 api_server.go:102] status: https://127.0.0.1:56889/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 20:55:29.900752   25888 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56889/healthz ...
	I0127 20:55:29.905708   25888 api_server.go:278] https://127.0.0.1:56889/healthz returned 200:
	ok
	I0127 20:55:29.912472   25888 api_server.go:140] control plane version: v1.26.1
	I0127 20:55:29.912492   25888 api_server.go:130] duration metric: took 4.013340714s to wait for apiserver health ...
	I0127 20:55:29.912503   25888 cni.go:84] Creating CNI manager for ""
	I0127 20:55:29.912512   25888 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:55:29.934621   25888 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 20:55:29.956335   25888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 20:55:29.966670   25888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0127 20:55:29.980156   25888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 20:55:29.989532   25888 system_pods.go:59] 9 kube-system pods found
	I0127 20:55:29.989584   25888 system_pods.go:61] "coredns-787d4945fb-l77dx" [ef12ed9e-b663-4f9f-ba65-0443ab4f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 20:55:29.989589   25888 system_pods.go:61] "coredns-787d4945fb-r7qmz" [33eeb817-a415-4a88-955c-3acf1598e48a] Running
	I0127 20:55:29.989593   25888 system_pods.go:61] "etcd-newest-cni-686000" [4567d674-8305-44c4-af5d-0ec67cbf935b] Running
	I0127 20:55:29.989596   25888 system_pods.go:61] "kube-apiserver-newest-cni-686000" [12796751-c95b-4061-bca2-e504fa3ee279] Running
	I0127 20:55:29.989604   25888 system_pods.go:61] "kube-controller-manager-newest-cni-686000" [531a136c-b3d5-4ddc-b4c9-01261cb6a8b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 20:55:29.989608   25888 system_pods.go:61] "kube-proxy-c2drc" [a22c8193-5c41-4dcc-b2e0-68e276b48f95] Running
	I0127 20:55:29.989613   25888 system_pods.go:61] "kube-scheduler-newest-cni-686000" [b42593d3-3881-414f-8761-4cc707e20de8] Running
	I0127 20:55:29.989618   25888 system_pods.go:61] "metrics-server-7997d45854-j2cdl" [cac2dbcf-98cc-4136-b5ed-369cecc7eff9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 20:55:29.989624   25888 system_pods.go:61] "storage-provisioner" [e658591e-713a-4a6f-b0a2-1a4326a1ad67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 20:55:29.989631   25888 system_pods.go:74] duration metric: took 9.462569ms to wait for pod list to return data ...
	I0127 20:55:29.989639   25888 node_conditions.go:102] verifying NodePressure condition ...
	I0127 20:55:29.993610   25888 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0127 20:55:29.993625   25888 node_conditions.go:123] node cpu capacity is 6
	I0127 20:55:29.993676   25888 node_conditions.go:105] duration metric: took 4.006932ms to run NodePressure ...
	I0127 20:55:29.993690   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:55:30.190080   25888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 20:55:30.200809   25888 ops.go:34] apiserver oom_adj: -16
	I0127 20:55:30.200825   25888 kubeadm.go:637] restartCluster took 16.576703697s
	I0127 20:55:30.200835   25888 kubeadm.go:403] StartCluster complete in 16.609934076s
	I0127 20:55:30.200856   25888 settings.go:142] acquiring lock: {Name:mk92099370375c5a2a7c1c2d1ac11f51c379e71f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:55:30.200956   25888 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 20:55:30.201655   25888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/kubeconfig: {Name:mkdfca390fbcfbb59336162afe07d375994efabb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:55:30.201985   25888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 20:55:30.202024   25888 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0127 20:55:30.202103   25888 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-686000"
	I0127 20:55:30.202120   25888 addons.go:65] Setting default-storageclass=true in profile "newest-cni-686000"
	I0127 20:55:30.202124   25888 addons.go:65] Setting metrics-server=true in profile "newest-cni-686000"
	I0127 20:55:30.202142   25888 addons.go:227] Setting addon metrics-server=true in "newest-cni-686000"
	I0127 20:55:30.202144   25888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-686000"
	W0127 20:55:30.202149   25888 addons.go:236] addon metrics-server should already be in state true
	I0127 20:55:30.202144   25888 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-686000"
	W0127 20:55:30.202161   25888 addons.go:236] addon storage-provisioner should already be in state true
	I0127 20:55:30.202205   25888 addons.go:65] Setting dashboard=true in profile "newest-cni-686000"
	I0127 20:55:30.202221   25888 addons.go:227] Setting addon dashboard=true in "newest-cni-686000"
	I0127 20:55:30.202223   25888 host.go:66] Checking if "newest-cni-686000" exists ...
	W0127 20:55:30.202229   25888 addons.go:236] addon dashboard should already be in state true
	I0127 20:55:30.202251   25888 host.go:66] Checking if "newest-cni-686000" exists ...
	I0127 20:55:30.202311   25888 host.go:66] Checking if "newest-cni-686000" exists ...
	I0127 20:55:30.202329   25888 config.go:180] Loaded profile config "newest-cni-686000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:55:30.202549   25888 cli_runner.go:164] Run: docker container inspect newest-cni-686000 --format={{.State.Status}}
	I0127 20:55:30.202733   25888 cli_runner.go:164] Run: docker container inspect newest-cni-686000 --format={{.State.Status}}
	I0127 20:55:30.203570   25888 cli_runner.go:164] Run: docker container inspect newest-cni-686000 --format={{.State.Status}}
	I0127 20:55:30.206088   25888 cli_runner.go:164] Run: docker container inspect newest-cni-686000 --format={{.State.Status}}
	I0127 20:55:30.212302   25888 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-686000" context rescaled to 1 replicas
	I0127 20:55:30.212344   25888 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:55:30.234277   25888 out.go:177] * Verifying Kubernetes components...
	I0127 20:55:30.276168   25888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:55:30.374046   25888 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0127 20:55:30.315262   25888 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0127 20:55:30.336380   25888 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 20:55:30.376215   25888 addons.go:227] Setting addon default-storageclass=true in "newest-cni-686000"
	I0127 20:55:30.408612   25888 start.go:881] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0127 20:55:30.408682   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:30.432533   25888 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	W0127 20:55:30.470134   25888 addons.go:236] addon default-storageclass should already be in state true
	I0127 20:55:30.491372   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 20:55:30.470180   25888 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0127 20:55:30.491437   25888 host.go:66] Checking if "newest-cni-686000" exists ...
	I0127 20:55:30.491470   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0127 20:55:30.491221   25888 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0127 20:55:30.491539   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:30.492706   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:30.494189   25888 cli_runner.go:164] Run: docker container inspect newest-cni-686000 --format={{.State.Status}}
	I0127 20:55:30.529307   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0127 20:55:30.529336   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0127 20:55:30.529955   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:30.569083   25888 api_server.go:51] waiting for apiserver process to appear ...
	I0127 20:55:30.569223   25888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:55:30.590961   25888 api_server.go:71] duration metric: took 378.566752ms to wait for apiserver process to appear ...
	I0127 20:55:30.590982   25888 api_server.go:87] waiting for apiserver healthz status ...
	I0127 20:55:30.590997   25888 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:56889/healthz ...
	I0127 20:55:30.598769   25888 api_server.go:278] https://127.0.0.1:56889/healthz returned 200:
	ok
	I0127 20:55:30.601359   25888 api_server.go:140] control plane version: v1.26.1
	I0127 20:55:30.601380   25888 api_server.go:130] duration metric: took 10.391023ms to wait for apiserver health ...
	I0127 20:55:30.601389   25888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 20:55:30.611590   25888 system_pods.go:59] 9 kube-system pods found
	I0127 20:55:30.611632   25888 system_pods.go:61] "coredns-787d4945fb-l77dx" [ef12ed9e-b663-4f9f-ba65-0443ab4f6b25] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 20:55:30.611645   25888 system_pods.go:61] "coredns-787d4945fb-r7qmz" [33eeb817-a415-4a88-955c-3acf1598e48a] Running
	I0127 20:55:30.611653   25888 system_pods.go:61] "etcd-newest-cni-686000" [4567d674-8305-44c4-af5d-0ec67cbf935b] Running
	I0127 20:55:30.611660   25888 system_pods.go:61] "kube-apiserver-newest-cni-686000" [12796751-c95b-4061-bca2-e504fa3ee279] Running
	I0127 20:55:30.611681   25888 system_pods.go:61] "kube-controller-manager-newest-cni-686000" [531a136c-b3d5-4ddc-b4c9-01261cb6a8b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 20:55:30.611698   25888 system_pods.go:61] "kube-proxy-c2drc" [a22c8193-5c41-4dcc-b2e0-68e276b48f95] Running
	I0127 20:55:30.611706   25888 system_pods.go:61] "kube-scheduler-newest-cni-686000" [b42593d3-3881-414f-8761-4cc707e20de8] Running
	I0127 20:55:30.611716   25888 system_pods.go:61] "metrics-server-7997d45854-j2cdl" [cac2dbcf-98cc-4136-b5ed-369cecc7eff9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0127 20:55:30.611726   25888 system_pods.go:61] "storage-provisioner" [e658591e-713a-4a6f-b0a2-1a4326a1ad67] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 20:55:30.611733   25888 system_pods.go:74] duration metric: took 10.337889ms to wait for pod list to return data ...
	I0127 20:55:30.611745   25888 default_sa.go:34] waiting for default service account to be created ...
	I0127 20:55:30.617435   25888 default_sa.go:45] found service account: "default"
	I0127 20:55:30.617453   25888 default_sa.go:55] duration metric: took 5.699083ms for default service account to be created ...
	I0127 20:55:30.617467   25888 kubeadm.go:578] duration metric: took 405.088398ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0127 20:55:30.617487   25888 node_conditions.go:102] verifying NodePressure condition ...
	I0127 20:55:30.631362   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:30.632466   25888 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 20:55:30.632509   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 20:55:30.632623   25888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-686000
	I0127 20:55:30.633734   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:30.636860   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:30.669825   25888 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0127 20:55:30.669846   25888 node_conditions.go:123] node cpu capacity is 6
	I0127 20:55:30.669857   25888 node_conditions.go:105] duration metric: took 52.364635ms to run NodePressure ...
	I0127 20:55:30.669867   25888 start.go:226] waiting for startup goroutines ...
	I0127 20:55:30.705805   25888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56885 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/newest-cni-686000/id_rsa Username:docker}
	I0127 20:55:30.881157   25888 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0127 20:55:30.881173   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0127 20:55:30.887079   25888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 20:55:30.892816   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0127 20:55:30.892829   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0127 20:55:30.979349   25888 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0127 20:55:30.979366   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0127 20:55:30.983110   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0127 20:55:30.983127   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0127 20:55:30.992803   25888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 20:55:31.006141   25888 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 20:55:31.006162   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0127 20:55:31.070187   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0127 20:55:31.070207   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0127 20:55:31.091371   25888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0127 20:55:31.102085   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0127 20:55:31.102103   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0127 20:55:31.194823   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0127 20:55:31.194838   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0127 20:55:31.277612   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0127 20:55:31.277633   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0127 20:55:31.297849   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0127 20:55:31.297868   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0127 20:55:31.383249   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0127 20:55:31.383267   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0127 20:55:31.405986   25888 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 20:55:31.405999   25888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0127 20:55:31.487396   25888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0127 20:55:32.409951   25888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.522844492s)
	I0127 20:55:32.409992   25888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.417159941s)
	I0127 20:55:32.410037   25888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.318643696s)
	I0127 20:55:32.410051   25888 addons.go:457] Verifying addon metrics-server=true in "newest-cni-686000"
	I0127 20:55:32.526489   25888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.039048323s)
	I0127 20:55:32.551578   25888 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-686000 addons enable metrics-server	
	
	
	I0127 20:55:32.593787   25888 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0127 20:55:32.651913   25888 addons.go:488] enableAddons completed in 2.449879952s
	I0127 20:55:32.652625   25888 ssh_runner.go:195] Run: rm -f paused
	I0127 20:55:32.694187   25888 start.go:538] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0127 20:55:32.717763   25888 out.go:177] * Done! kubectl is now configured to use "newest-cni-686000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 04:29:37 UTC, end at Sat 2023-01-28 04:56:41 UTC. --
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.390742681Z" level=info msg="Processing signal 'terminated'"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.391689237Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[440]: time="2023-01-28T04:29:40.391849511Z" level=info msg="Daemon shutdown complete"
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: docker.service: Succeeded.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.440920078Z" level=info msg="Starting up"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442625226Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442662033Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442682069Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.442698479Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444251405Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444290828Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444304767Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.444311848Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.450641942Z" level=info msg="Loading containers: start."
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.531616300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.563882138Z" level=info msg="Loading containers: done."
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.572136805Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=overlay2 version=20.10.22
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.572200080Z" level=info msg="Daemon has completed initialization"
	Jan 28 04:29:40 old-k8s-version-720000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.593651432Z" level=info msg="API listen on [::]:2376"
	Jan 28 04:29:40 old-k8s-version-720000 dockerd[627]: time="2023-01-28T04:29:40.600920617Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-28T04:56:43Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  04:56:44 up  1:55,  0 users,  load average: 0.60, 0.80, 0.82
	Linux old-k8s-version-720000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 04:29:37 UTC, end at Sat 2023-01-28 04:56:44 UTC. --
	Jan 28 04:56:42 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 04:56:42 old-k8s-version-720000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jan 28 04:56:42 old-k8s-version-720000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 04:56:42 old-k8s-version-720000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 04:56:42 old-k8s-version-720000 kubelet[34925]: I0128 04:56:42.815893   34925 server.go:410] Version: v1.16.0
	Jan 28 04:56:42 old-k8s-version-720000 kubelet[34925]: I0128 04:56:42.816259   34925 plugins.go:100] No cloud provider specified.
	Jan 28 04:56:42 old-k8s-version-720000 kubelet[34925]: I0128 04:56:42.816387   34925 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 04:56:42 old-k8s-version-720000 kubelet[34925]: I0128 04:56:42.818382   34925 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 04:56:42 old-k8s-version-720000 kubelet[34925]: W0128 04:56:42.819032   34925 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 04:56:42 old-k8s-version-720000 kubelet[34925]: W0128 04:56:42.819101   34925 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 04:56:42 old-k8s-version-720000 kubelet[34925]: F0128 04:56:42.819129   34925 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 04:56:42 old-k8s-version-720000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 04:56:42 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 04:56:43 old-k8s-version-720000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Jan 28 04:56:43 old-k8s-version-720000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 04:56:43 old-k8s-version-720000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 04:56:43 old-k8s-version-720000 kubelet[34938]: I0128 04:56:43.572226   34938 server.go:410] Version: v1.16.0
	Jan 28 04:56:43 old-k8s-version-720000 kubelet[34938]: I0128 04:56:43.572584   34938 plugins.go:100] No cloud provider specified.
	Jan 28 04:56:43 old-k8s-version-720000 kubelet[34938]: I0128 04:56:43.572608   34938 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 04:56:43 old-k8s-version-720000 kubelet[34938]: I0128 04:56:43.574569   34938 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 04:56:43 old-k8s-version-720000 kubelet[34938]: W0128 04:56:43.575310   34938 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 04:56:43 old-k8s-version-720000 kubelet[34938]: W0128 04:56:43.575379   34938 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 04:56:43 old-k8s-version-720000 kubelet[34938]: F0128 04:56:43.575405   34938 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 04:56:43 old-k8s-version-720000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 04:56:43 old-k8s-version-720000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:56:43.926401   26208 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 2 (410.181635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-720000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.80s)

                                                
                                    

Test pass (274/306)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 22.89
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.26.1/json-events 4.83
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 12.08
19 TestBinaryMirror 1.66
20 TestOffline 60.13
22 TestAddons/Setup 150.76
26 TestAddons/parallel/MetricsServer 5.69
27 TestAddons/parallel/HelmTiller 12.64
29 TestAddons/parallel/CSI 41.43
30 TestAddons/parallel/Headlamp 14.43
31 TestAddons/parallel/CloudSpanner 5.58
34 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/StoppedEnableDisable 11.63
36 TestCertOptions 38.65
37 TestCertExpiration 253.67
38 TestDockerFlags 39.67
39 TestForceSystemdFlag 39.69
40 TestForceSystemdEnv 39.72
42 TestHyperKitDriverInstallOrUpdate 5.72
45 TestErrorSpam/setup 33.29
46 TestErrorSpam/start 2.4
47 TestErrorSpam/status 1.32
48 TestErrorSpam/pause 1.87
49 TestErrorSpam/unpause 2.12
50 TestErrorSpam/stop 11.59
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 46.28
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 44.65
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.08
61 TestFunctional/serial/CacheCmd/cache/add_remote 7.52
62 TestFunctional/serial/CacheCmd/cache/add_local 1.72
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.09
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.73
67 TestFunctional/serial/CacheCmd/cache/delete 0.17
68 TestFunctional/serial/MinikubeKubectlCmd 0.57
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.71
70 TestFunctional/serial/ExtraConfig 42.47
71 TestFunctional/serial/ComponentHealth 0.06
72 TestFunctional/serial/LogsCmd 2.99
73 TestFunctional/serial/LogsFileCmd 3.17
75 TestFunctional/parallel/ConfigCmd 0.51
76 TestFunctional/parallel/DashboardCmd 13.89
77 TestFunctional/parallel/DryRun 1.54
78 TestFunctional/parallel/InternationalLanguage 0.73
79 TestFunctional/parallel/StatusCmd 1.3
82 TestFunctional/parallel/ServiceCmd 19.88
84 TestFunctional/parallel/AddonsCmd 0.27
85 TestFunctional/parallel/PersistentVolumeClaim 26.84
87 TestFunctional/parallel/SSHCmd 0.85
88 TestFunctional/parallel/CpCmd 2.18
89 TestFunctional/parallel/MySQL 25.15
90 TestFunctional/parallel/FileSync 0.46
91 TestFunctional/parallel/CertSync 2.86
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
99 TestFunctional/parallel/License 0.43
100 TestFunctional/parallel/Version/short 0.1
101 TestFunctional/parallel/Version/components 1.14
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.42
107 TestFunctional/parallel/ImageCommands/Setup 2.46
108 TestFunctional/parallel/DockerEnv/bash 2.06
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.86
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.46
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.36
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.71
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 11.23
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.28
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.93
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.76
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.15
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
130 TestFunctional/parallel/ProfileCmd/profile_list 0.52
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
132 TestFunctional/parallel/MountCmd/any-port 8.77
133 TestFunctional/parallel/MountCmd/specific-port 2.51
134 TestFunctional/delete_addon-resizer_images 0.16
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
140 TestImageBuild/serial/NormalBuild 2.2
141 TestImageBuild/serial/BuildWithBuildArg 0.94
142 TestImageBuild/serial/BuildWithDockerIgnore 0.48
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.43
153 TestJSONOutput/start/Command 47.35
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.67
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.62
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 5.86
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.77
178 TestKicCustomNetwork/create_custom_network 35.6
179 TestKicCustomNetwork/use_default_bridge_network 42.37
180 TestKicExistingNetwork 37.71
181 TestKicCustomSubnet 36.13
182 TestKicStaticIP 33.13
183 TestMainNoArgs 0.08
184 TestMinikubeProfile 70.1
187 TestMountStart/serial/StartWithMountFirst 8.03
188 TestMountStart/serial/VerifyMountFirst 0.41
189 TestMountStart/serial/StartWithMountSecond 8.15
190 TestMountStart/serial/VerifyMountSecond 0.41
191 TestMountStart/serial/DeleteFirst 2.15
192 TestMountStart/serial/VerifyMountPostDelete 0.41
193 TestMountStart/serial/Stop 1.59
194 TestMountStart/serial/RestartStopped 6.18
195 TestMountStart/serial/VerifyMountPostStop 0.42
198 TestMultiNode/serial/FreshStart2Nodes 82
199 TestMultiNode/serial/DeployApp2Nodes 10.84
200 TestMultiNode/serial/PingHostFrom2Pods 0.95
201 TestMultiNode/serial/AddNode 22.62
202 TestMultiNode/serial/ProfileList 0.48
203 TestMultiNode/serial/CopyFile 15.37
204 TestMultiNode/serial/StopNode 3.11
205 TestMultiNode/serial/StartAfterStop 10.63
206 TestMultiNode/serial/RestartKeepsNodes 88.55
207 TestMultiNode/serial/DeleteNode 6.36
208 TestMultiNode/serial/StopMultiNode 22.01
209 TestMultiNode/serial/RestartMultiNode 53.82
210 TestMultiNode/serial/ValidateNameConflict 37.52
214 TestPreload 123.97
216 TestScheduledStopUnix 108.38
217 TestSkaffold 67.56
219 TestInsufficientStorage 15.63
235 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 6.99
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 12.05
237 TestStoppedBinaryUpgrade/Setup 0.66
239 TestStoppedBinaryUpgrade/MinikubeLogs 3.59
241 TestPause/serial/Start 47.04
242 TestPause/serial/SecondStartNoReconfiguration 47.83
243 TestPause/serial/Pause 0.74
244 TestPause/serial/VerifyStatus 0.43
245 TestPause/serial/Unpause 0.69
246 TestPause/serial/PauseAgain 0.77
247 TestPause/serial/DeletePaused 2.68
248 TestPause/serial/VerifyDeletedResources 0.59
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
258 TestNoKubernetes/serial/StartWithK8s 33.23
259 TestNoKubernetes/serial/StartWithStopK8s 9.3
260 TestNoKubernetes/serial/Start 7.43
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
262 TestNoKubernetes/serial/ProfileList 30.67
263 TestNoKubernetes/serial/Stop 1.57
264 TestNoKubernetes/serial/StartNoArgs 4.98
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
266 TestNetworkPlugins/group/auto/Start 46.32
267 TestNetworkPlugins/group/auto/KubeletFlags 0.43
268 TestNetworkPlugins/group/auto/NetCatPod 15.19
269 TestNetworkPlugins/group/auto/DNS 0.15
270 TestNetworkPlugins/group/auto/Localhost 0.12
271 TestNetworkPlugins/group/auto/HairPin 0.13
272 TestNetworkPlugins/group/calico/Start 75.05
273 TestNetworkPlugins/group/calico/ControllerPod 5.02
274 TestNetworkPlugins/group/calico/KubeletFlags 0.43
275 TestNetworkPlugins/group/calico/NetCatPod 20.22
276 TestNetworkPlugins/group/calico/DNS 0.14
277 TestNetworkPlugins/group/calico/Localhost 0.13
278 TestNetworkPlugins/group/calico/HairPin 0.12
279 TestNetworkPlugins/group/custom-flannel/Start 64.73
280 TestNetworkPlugins/group/false/Start 48.33
281 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.49
282 TestNetworkPlugins/group/false/KubeletFlags 0.48
283 TestNetworkPlugins/group/custom-flannel/NetCatPod 20.23
284 TestNetworkPlugins/group/false/NetCatPod 16.22
285 TestNetworkPlugins/group/false/DNS 0.13
286 TestNetworkPlugins/group/false/Localhost 0.11
287 TestNetworkPlugins/group/false/HairPin 0.11
288 TestNetworkPlugins/group/custom-flannel/DNS 0.15
289 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
290 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
291 TestNetworkPlugins/group/kindnet/Start 58.44
292 TestNetworkPlugins/group/flannel/Start 50.99
293 TestNetworkPlugins/group/flannel/ControllerPod 5.02
294 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
295 TestNetworkPlugins/group/flannel/KubeletFlags 0.48
296 TestNetworkPlugins/group/flannel/NetCatPod 18.19
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
298 TestNetworkPlugins/group/kindnet/NetCatPod 18.23
299 TestNetworkPlugins/group/flannel/DNS 0.13
300 TestNetworkPlugins/group/flannel/Localhost 0.11
301 TestNetworkPlugins/group/flannel/HairPin 0.12
302 TestNetworkPlugins/group/kindnet/DNS 0.13
303 TestNetworkPlugins/group/kindnet/Localhost 0.12
304 TestNetworkPlugins/group/kindnet/HairPin 0.12
305 TestNetworkPlugins/group/enable-default-cni/Start 48.8
306 TestNetworkPlugins/group/bridge/Start 48.23
307 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.47
308 TestNetworkPlugins/group/bridge/KubeletFlags 0.45
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.32
310 TestNetworkPlugins/group/bridge/NetCatPod 14.28
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
313 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
314 TestNetworkPlugins/group/bridge/DNS 0.13
315 TestNetworkPlugins/group/bridge/Localhost 0.12
316 TestNetworkPlugins/group/bridge/HairPin 0.11
317 TestNetworkPlugins/group/kubenet/Start 50.57
320 TestNetworkPlugins/group/kubenet/KubeletFlags 0.42
321 TestNetworkPlugins/group/kubenet/NetCatPod 18.19
322 TestNetworkPlugins/group/kubenet/DNS 0.13
323 TestNetworkPlugins/group/kubenet/Localhost 0.12
324 TestNetworkPlugins/group/kubenet/HairPin 0.12
326 TestStartStop/group/no-preload/serial/FirstStart 56.87
327 TestStartStop/group/no-preload/serial/DeployApp 10.27
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.88
329 TestStartStop/group/no-preload/serial/Stop 11.03
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.4
331 TestStartStop/group/no-preload/serial/SecondStart 581.11
334 TestStartStop/group/old-k8s-version/serial/Stop 1.66
335 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
340 TestStartStop/group/no-preload/serial/Pause 3.35
342 TestStartStop/group/embed-certs/serial/FirstStart 49
343 TestStartStop/group/embed-certs/serial/DeployApp 9.27
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
345 TestStartStop/group/embed-certs/serial/Stop 11.01
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.54
347 TestStartStop/group/embed-certs/serial/SecondStart 556.43
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.46
352 TestStartStop/group/embed-certs/serial/Pause 3.51
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.73
356 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
358 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.92
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.46
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 304.47
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
362 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.45
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.39
366 TestStartStop/group/newest-cni/serial/FirstStart 44.64
367 TestStartStop/group/newest-cni/serial/DeployApp 0
368 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
369 TestStartStop/group/newest-cni/serial/Stop 10.99
370 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
371 TestStartStop/group/newest-cni/serial/SecondStart 25.47
372 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
375 TestStartStop/group/newest-cni/serial/Pause 3.33
x
+
TestDownloadOnly/v1.16.0/json-events (22.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (22.89126377s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (22.89s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-504000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-504000: exit status 85 (296.223842ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-504000 | jenkins | v1.28.0 | 27 Jan 23 19:30 PST |          |
	|         | -p download-only-504000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 19:30:11
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 19:30:11.089761    4408 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:30:11.089928    4408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:11.089934    4408 out.go:309] Setting ErrFile to fd 2...
	I0127 19:30:11.089938    4408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:11.090041    4408 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	W0127 19:30:11.090148    4408 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3092/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3092/.minikube/config/config.json: no such file or directory
	I0127 19:30:11.090847    4408 out.go:303] Setting JSON to true
	I0127 19:30:11.109377    4408 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1785,"bootTime":1674874826,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 19:30:11.109475    4408 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:30:11.131992    4408 out.go:97] [download-only-504000] minikube v1.28.0 on Darwin 13.2
	I0127 19:30:11.132149    4408 notify.go:220] Checking for updates...
	W0127 19:30:11.132191    4408 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 19:30:11.154235    4408 out.go:169] MINIKUBE_LOCATION=15565
	I0127 19:30:11.197929    4408 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 19:30:11.219233    4408 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:30:11.241450    4408 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:30:11.263027    4408 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	W0127 19:30:11.306242    4408 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 19:30:11.306633    4408 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 19:30:11.367108    4408 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 19:30:11.367216    4408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 19:30:11.513303    4408 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-28 03:30:11.41597208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 19:30:11.534962    4408 out.go:97] Using the docker driver based on user configuration
	I0127 19:30:11.535004    4408 start.go:296] selected driver: docker
	I0127 19:30:11.535016    4408 start.go:840] validating driver "docker" against <nil>
	I0127 19:30:11.535225    4408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 19:30:11.676313    4408 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:false NGoroutines:49 SystemTime:2023-01-28 03:30:11.58418656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 19:30:11.676429    4408 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 19:30:11.680665    4408 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0127 19:30:11.680768    4408 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 19:30:11.702130    4408 out.go:169] Using Docker Desktop driver with root privileges
	I0127 19:30:11.724200    4408 cni.go:84] Creating CNI manager for ""
	I0127 19:30:11.724239    4408 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 19:30:11.724259    4408 start_flags.go:319] config:
	{Name:download-only-504000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-504000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:30:11.745935    4408 out.go:97] Starting control plane node download-only-504000 in cluster download-only-504000
	I0127 19:30:11.746040    4408 cache.go:120] Beginning downloading kic base image for docker with docker
	I0127 19:30:11.768098    4408 out.go:97] Pulling base image ...
	I0127 19:30:11.768216    4408 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 19:30:11.768336    4408 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local docker daemon
	I0127 19:30:11.823359    4408 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0127 19:30:11.823604    4408 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a in local cache directory
	I0127 19:30:11.823726    4408 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a to local cache
	I0127 19:30:11.827778    4408 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 19:30:11.827791    4408 cache.go:57] Caching tarball of preloaded images
	I0127 19:30:11.827959    4408 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 19:30:11.849167    4408 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0127 19:30:11.849264    4408 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:11.936743    4408 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 19:30:14.785869    4408 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:14.786008    4408 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:15.331764    4408 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0127 19:30:15.331979    4408 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/download-only-504000/config.json ...
	I0127 19:30:15.332004    4408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/download-only-504000/config.json: {Name:mk2755c27275df4500b72d6f9f475029744705ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:30:15.332269    4408 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 19:30:15.332523    4408 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15565-3092/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-504000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/json-events (4.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-504000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (4.827326337s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (4.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-504000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-504000: exit status 85 (291.792757ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-504000 | jenkins | v1.28.0 | 27 Jan 23 19:30 PST |          |
	|         | -p download-only-504000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-504000 | jenkins | v1.28.0 | 27 Jan 23 19:30 PST |          |
	|         | -p download-only-504000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 19:30:34
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 19:30:34.279687    4464 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:30:34.279863    4464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:34.279868    4464 out.go:309] Setting ErrFile to fd 2...
	I0127 19:30:34.279872    4464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:34.279984    4464 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	W0127 19:30:34.280087    4464 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3092/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3092/.minikube/config/config.json: no such file or directory
	I0127 19:30:34.280447    4464 out.go:303] Setting JSON to true
	I0127 19:30:34.298771    4464 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1808,"bootTime":1674874826,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 19:30:34.298859    4464 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:30:34.321050    4464 out.go:97] [download-only-504000] minikube v1.28.0 on Darwin 13.2
	I0127 19:30:34.321192    4464 notify.go:220] Checking for updates...
	I0127 19:30:34.342601    4464 out.go:169] MINIKUBE_LOCATION=15565
	I0127 19:30:34.363856    4464 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 19:30:34.386090    4464 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:30:34.407898    4464 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:30:34.429998    4464 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-504000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-504000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

                                                
                                    
x
+
TestDownloadOnlyKic (12.08s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-892000 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-892000 --force --alsologtostderr --driver=docker : (10.981002195s)
helpers_test.go:175: Cleaning up "download-docker-892000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-892000
--- PASS: TestDownloadOnlyKic (12.08s)

                                                
                                    
x
+
TestBinaryMirror (1.66s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-335000 --alsologtostderr --binary-mirror http://127.0.0.1:49473 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-335000 --alsologtostderr --binary-mirror http://127.0.0.1:49473 --driver=docker : (1.044728193s)
helpers_test.go:175: Cleaning up "binary-mirror-335000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-335000
--- PASS: TestBinaryMirror (1.66s)

                                                
                                    
x
+
TestOffline (60.13s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-362000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-362000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (57.377258399s)
helpers_test.go:175: Cleaning up "offline-docker-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-362000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-362000: (2.752057007s)
--- PASS: TestOffline (60.13s)

                                                
                                    
x
+
TestAddons/Setup (150.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-492000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-492000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.758617361s)
--- PASS: TestAddons/Setup (150.76s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.177264ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-75pb7" [fff80a11-832d-4373-a0c6-14a9e98a85b3] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012870563s
addons_test.go:380: (dbg) Run:  kubectl --context addons-492000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-492000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.69s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.64s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 6.225134ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-2v7h4" [e62c764f-46d6-446d-96a3-941a04918fff] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.010997733s
addons_test.go:438: (dbg) Run:  kubectl --context addons-492000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:438: (dbg) Done: kubectl --context addons-492000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.094718113s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-492000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.64s)

                                                
                                    
TestAddons/parallel/CSI (41.43s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.662521ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [62f3b773-3b8d-4234-b736-d6b6ff2ea9f2] Pending

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [62f3b773-3b8d-4234-b736-d6b6ff2ea9f2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [62f3b773-3b8d-4234-b736-d6b6ff2ea9f2] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.009626948s
addons_test.go:549: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-492000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-492000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-492000 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-492000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-492000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-492000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7db5cd82-4bbe-4f18-a6e5-ac8447ac942b] Pending
helpers_test.go:344: "task-pv-pod-restore" [7db5cd82-4bbe-4f18-a6e5-ac8447ac942b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7db5cd82-4bbe-4f18-a6e5-ac8447ac942b] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.009625259s
addons_test.go:591: (dbg) Run:  kubectl --context addons-492000 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-492000 delete pod task-pv-pod-restore: (1.653989549s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-492000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-492000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-492000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-492000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.198840443s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-492000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.43s)
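Note: the PVC waits above (helpers_test.go:394) poll `kubectl get pvc -o jsonpath={.status.phase}` until it reports Bound. A rough, self-contained sketch of that polling loop (an assumed helper, not the real harness; names taken from this log):

// pvc_poll.go: hedged sketch of the PVC polling used by the CSI test above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the PVC phase until it reports Bound or the deadline passes.
func waitForPVCBound(ctx, name, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-492000", "hpvc", "default", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hpvc is Bound")
}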

                                                
                                    
TestAddons/parallel/Headlamp (14.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-492000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-492000 --alsologtostderr -v=1: (2.369953261s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-85bhb" [5d7f5d16-4f80-428f-ad99-d4ff89ec0d6a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:344: "headlamp-5759877c79-85bhb" [5d7f5d16-4f80-428f-ad99-d4ff89ec0d6a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.055494344s
--- PASS: TestAddons/parallel/Headlamp (14.43s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:344: "cloud-spanner-emulator-5dcf58dbbb-64gld" [a726749b-d7bd-49f5-b676-593828a6f600] Running

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009594127s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-492000
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-492000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-492000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.63s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-492000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-492000: (11.173963101s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-492000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-492000
--- PASS: TestAddons/StoppedEnableDisable (11.63s)

                                                
                                    
TestCertOptions (38.65s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-125000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-125000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (35.105223291s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-125000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-125000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-125000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-125000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-125000: (2.648274998s)
--- PASS: TestCertOptions (38.65s)
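Note: cert_options_test.go:60 verifies the extra SANs by dumping the apiserver certificate over `minikube ssh`. A hedged Go sketch of that assertion (binary path and profile name copied from the log; the expected SAN list below is illustrative):

// cert_sans.go: sketch of checking apiserver.crt for the requested SANs.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "cert-options-125000" // profile name from the log
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		log.Fatalf("reading apiserver.crt failed: %v\n%s", err, out)
	}
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			log.Fatalf("certificate is missing expected SAN %q", want)
		}
	}
	fmt.Println("apiserver certificate contains the requested SANs")
}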

                                                
                                    
TestCertExpiration (253.67s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-664000 --memory=2048 --cert-expiration=3m --driver=docker 

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-664000 --memory=2048 --cert-expiration=3m --driver=docker : (38.865977898s)

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-664000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-664000 --memory=2048 --cert-expiration=8760h --driver=docker : (31.807219321s)
helpers_test.go:175: Cleaning up "cert-expiration-664000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-664000
E0127 20:11:01.466706    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-664000: (2.995132643s)
--- PASS: TestCertExpiration (253.67s)

                                                
                                    
TestDockerFlags (39.67s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-507000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-507000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (35.786743636s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-507000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-507000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-507000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-507000: (2.829215101s)
--- PASS: TestDockerFlags (39.67s)
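Note: docker_test.go:50 asserts that the --docker-env values reach the docker unit. A minimal sketch of that check (profile name from the log above; not the actual test helper):

// docker_flags.go: confirm --docker-env values appear in the docker unit's Environment.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "docker-flags-507000" // profile name from the log
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh",
		"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		log.Fatalf("systemctl show failed: %v\n%s", err, out)
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), want) {
			log.Fatalf("expected %q in docker Environment, got:\n%s", want, out)
		}
	}
	log.Println("docker daemon picked up the --docker-env flags")
}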

                                                
                                    
TestForceSystemdFlag (39.69s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-610000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-610000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (36.361910569s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-610000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-610000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-610000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-610000: (2.819403546s)
--- PASS: TestForceSystemdFlag (39.69s)

                                                
                                    
TestForceSystemdEnv (39.72s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-906000 --memory=2048 --alsologtostderr -v=5 --driver=docker 

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-906000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (36.23909071s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-906000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-906000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-906000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-906000: (2.874083636s)
--- PASS: TestForceSystemdEnv (39.72s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (5.72s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (5.72s)

                                                
                                    
TestErrorSpam/setup (33.29s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-976000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-976000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 --driver=docker : (33.29426547s)
--- PASS: TestErrorSpam/setup (33.29s)

                                                
                                    
TestErrorSpam/start (2.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 start --dry-run
--- PASS: TestErrorSpam/start (2.40s)

                                                
                                    
TestErrorSpam/status (1.32s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 status
--- PASS: TestErrorSpam/status (1.32s)

                                                
                                    
TestErrorSpam/pause (1.87s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 pause
--- PASS: TestErrorSpam/pause (1.87s)

                                                
                                    
TestErrorSpam/unpause (2.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 unpause
--- PASS: TestErrorSpam/unpause (2.12s)

                                                
                                    
TestErrorSpam/stop (11.59s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 stop: (10.937785309s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-976000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-976000 stop
--- PASS: TestErrorSpam/stop (11.59s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15565-3092/.minikube/files/etc/test/nested/copy/4406/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (46.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-334000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (46.275973388s)
--- PASS: TestFunctional/serial/StartWithProxy (46.28s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.65s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-334000 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-334000 --alsologtostderr -v=8: (44.647406316s)
functional_test.go:656: soft start took 44.647958523s for "functional-334000" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.65s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-334000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (7.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 cache add k8s.gcr.io/pause:3.1: (2.562206456s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 cache add k8s.gcr.io/pause:3.3: (2.5312952s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 cache add k8s.gcr.io/pause:latest: (2.42764776s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-334000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local807360387/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cache add minikube-local-cache-test:functional-334000
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 cache add minikube-local-cache-test:functional-334000: (1.161354149s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cache delete minikube-local-cache-test:functional-334000
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-334000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (413.841545ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 cache reload: (1.44393905s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.73s)
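Note: the cache_reload sequence above is: remove the image inside the node, expect `crictl inspecti` to fail, run `cache reload`, then expect `crictl inspecti` to succeed. A compact, hedged sketch of that flow (binary path and names taken from the log; error handling simplified):

// cache_reload.go: sketch of the cache-reload round trip exercised above.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	log.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	mk := "out/minikube-darwin-amd64"
	profile := "functional-334000"
	img := "k8s.gcr.io/pause:latest"

	_ = run(mk, "-p", profile, "ssh", "sudo docker rmi "+img) // drop the image in the node
	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) == nil {
		log.Fatal("image still present; expected inspecti to fail")
	}
	if err := run(mk, "-p", profile, "cache", "reload"); err != nil { // re-push cached images
		log.Fatal(err)
	}
	if err := run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		log.Fatal("image missing after cache reload")
	}
}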

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 kubectl -- --context functional-334000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.57s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-334000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.71s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.47s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-334000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 19:38:25.305976    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:25.312719    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:25.324913    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:25.345115    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:25.386013    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:25.466989    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:25.628787    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:25.950052    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:26.590381    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:27.872252    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:38:30.434395    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-334000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.470004471s)
functional_test.go:754: restart took 42.470131447s for "functional-334000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.47s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-334000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
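Note: functional_test.go:803 lists the control-plane pods as JSON and reports each component's phase and Ready condition, which is what produces the phase/status lines above. A hedged sketch of that parsing (struct fields limited to what the check needs; not the test's own types):

// component_health.go: sketch of reading control-plane pod health via kubectl JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-334000",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}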

                                                
                                    
TestFunctional/serial/LogsCmd (2.99s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 logs: (2.988111658s)
--- PASS: TestFunctional/serial/LogsCmd (2.99s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd729674090/001/logs.txt
E0127 19:38:35.554518    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd729674090/001/logs.txt: (3.163934094s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.17s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 config get cpus: exit status 14 (63.503279ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 config unset cpus

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 config get cpus: exit status 14 (62.543462ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
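Note: the two Non-zero exit entries above are expected: `config get` on an unset key exits with status 14. A small, hedged sketch of asserting that exit code from Go (not the test's own helper; binary path and profile from the log):

// config_cmd.go: expect exit status 14 from `config get` on an unset key.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-334000", "config", "get", "cpus")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		log.Println("got expected exit status 14 for an unset config key")
		return
	}
	log.Fatalf("expected exit status 14, got: %v", err)
}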

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1]

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-334000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 7165: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.89s)

                                                
                                    
TestFunctional/parallel/DryRun (1.54s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
E0127 19:39:47.236212    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (726.57373ms)

                                                
                                                
-- stdout --
	* [functional-334000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 19:39:46.851290    7083 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:39:46.851451    7083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:46.851457    7083 out.go:309] Setting ErrFile to fd 2...
	I0127 19:39:46.851461    7083 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:46.851574    7083 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 19:39:46.852037    7083 out.go:303] Setting JSON to false
	I0127 19:39:46.871093    7083 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2360,"bootTime":1674874826,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 19:39:46.871188    7083 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:39:46.893587    7083 out.go:177] * [functional-334000] minikube v1.28.0 on Darwin 13.2
	I0127 19:39:46.940283    7083 notify.go:220] Checking for updates...
	I0127 19:39:46.962289    7083 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 19:39:46.983019    7083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 19:39:47.004269    7083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:39:47.025633    7083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:39:47.047446    7083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 19:39:47.106335    7083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 19:39:47.128925    7083 config.go:180] Loaded profile config "functional-334000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:39:47.129510    7083 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 19:39:47.192200    7083 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 19:39:47.192338    7083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 19:39:47.343399    7083 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 03:39:47.245715045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 19:39:47.387250    7083 out.go:177] * Using the docker driver based on existing profile
	I0127 19:39:47.408116    7083 start.go:296] selected driver: docker
	I0127 19:39:47.408133    7083 start.go:840] validating driver "docker" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-334000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:f
alse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:39:47.408230    7083 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 19:39:47.433322    7083 out.go:177] 
	W0127 19:39:47.454460    7083 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 19:39:47.476048    7083 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-334000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.54s)
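Note: the dry run with --memory 250MB is meant to fail, since 250MiB is below the 1800MB usable minimum reported in the stderr block above (exit status 23 in this run). A hedged sketch of asserting that failure (binary path and profile from the log; exact exit code not hard-coded):

// dry_run.go: the low-memory dry run must fail with a non-zero exit status.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-334000",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=docker")
	out, err := cmd.CombinedOutput()

	if err == nil {
		log.Fatalf("dry run unexpectedly succeeded:\n%s", out)
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		log.Printf("dry run rejected 250MB as expected (exit status %d)", exitErr.ExitCode())
		return
	}
	log.Fatalf("could not run minikube: %v", err)
}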

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.73s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=docker 

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-334000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (728.154343ms)

                                                
                                                
-- stdout --
	* [functional-334000] minikube v1.28.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 19:39:48.382477    7121 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:39:48.382620    7121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:48.382625    7121 out.go:309] Setting ErrFile to fd 2...
	I0127 19:39:48.382629    7121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:48.382753    7121 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 19:39:48.383191    7121 out.go:303] Setting JSON to false
	I0127 19:39:48.402553    7121 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2362,"bootTime":1674874826,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0127 19:39:48.402654    7121 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:39:48.424725    7121 out.go:177] * [functional-334000] minikube v1.28.0 sur Darwin 13.2
	I0127 19:39:48.446634    7121 notify.go:220] Checking for updates...
	I0127 19:39:48.468221    7121 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 19:39:48.489176    7121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	I0127 19:39:48.510302    7121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:39:48.551985    7121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:39:48.573296    7121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	I0127 19:39:48.594264    7121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 19:39:48.615436    7121 config.go:180] Loaded profile config "functional-334000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:39:48.615794    7121 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 19:39:48.681749    7121 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0127 19:39:48.681924    7121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0127 19:39:48.843282    7121 info.go:266] docker info: {ID:XCAM:233U:IDBC:CZDL:7XI4:H6O5:GF2W:UEZ3:QAV3:CHAS:H4H5:PY7S Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:56 SystemTime:2023-01-28 03:39:48.73874557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:
/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0127 19:39:48.885262    7121 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0127 19:39:48.906211    7121 start.go:296] selected driver: docker
	I0127 19:39:48.906229    7121 start.go:840] validating driver "docker" against &{Name:functional-334000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-334000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:f
alse portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:39:48.906370    7121 start.go:851] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 19:39:48.930322    7121 out.go:177] 
	W0127 19:39:48.972535    7121 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 19:39:49.014379    7121 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.73s)
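Note: the French lines above are the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY error seen in the DryRun test (the requested 250MiB allocation is below the usable minimum of 1800MB). A minimal sketch of reproducing this outside the test harness; it assumes the translated output is selected via the LC_ALL environment variable, which is an assumption, not something shown in this log, while the binary path and flags are the ones used in this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Sketch only: replay the InternationalLanguage dry-run with a French locale.
// LC_ALL as the locale switch is an assumption; the command line is from the log.
func main() {
	cmd := exec.Command("out/minikube-darwin-amd64",
		"start", "-p", "functional-334000",
		"--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		// A non-zero exit (status 23 in this run) is the expected outcome here.
		fmt.Println("expected non-zero exit:", err)
	}
}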

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
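The StatusCmd block exercises three output modes: the default table, a Go template passed with -f, and JSON with -o json. A sketch of consuming the JSON form follows; the field names Host, Kubelet, APIServer and Kubeconfig are taken from the template above, but treating them as the top-level JSON keys (and the exact shape of the JSON) is an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Sketch only: field names mirror the -f template used in the test; the JSON
// layout of `minikube status -o json` is assumed, not confirmed by this report.
type status struct {
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-334000",
		"status", "-o", "json").Output()
	if err != nil {
		// status exits non-zero when components are not all running
		fmt.Println("status returned:", err)
	}
	var s status
	if err := json.Unmarshal(out, &s); err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
}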

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd (19.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-334000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-334000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-jnxbq" [45aefac6-310d-44b8-bf52-9f3ee0c131de] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0127 19:39:06.274711    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-jnxbq" [45aefac6-310d-44b8-bf52-9f3ee0c131de] Running

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.008547275s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 service list
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 service --namespace=default --https --url hello-node
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 service --namespace=default --https --url hello-node: (2.026203836s)
functional_test.go:1476: found endpoint: https://127.0.0.1:50390
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 service hello-node --url --format={{.IP}}: (2.025714988s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 service hello-node --url
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 service hello-node --url: (2.025118646s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:50406
--- PASS: TestFunctional/parallel/ServiceCmd (19.88s)
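Once the ServiceCmd test has resolved a URL for the NodePort service (http://127.0.0.1:50406 in this run, an ephemeral tunnel address), the endpoint can be checked with any HTTP client. A minimal sketch:

package main

import (
	"fmt"
	"net/http"
)

// Sketch only: the URL below is the ephemeral endpoint printed in this run and
// will differ on every invocation of `minikube service hello-node --url`.
func main() {
	resp, err := http.Get("http://127.0.0.1:50406")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("echoserver answered with status:", resp.Status)
}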

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5864dd20-8953-4dcb-b2c1-810fd0193a45] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008318296s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-334000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-334000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9685300c-acca-48d8-abd5-65df22ff2e5d] Pending
helpers_test.go:344: "sp-pod" [9685300c-acca-48d8-abd5-65df22ff2e5d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [9685300c-acca-48d8-abd5-65df22ff2e5d] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007963875s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-334000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-334000 delete -f testdata/storage-provisioner/pod.yaml

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-334000 delete -f testdata/storage-provisioner/pod.yaml: (1.086241655s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c7588e9-d505-4511-8aa6-3c50c77f3e96] Pending
helpers_test.go:344: "sp-pod" [6c7588e9-d505-4511-8aa6-3c50c77f3e96] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [6c7588e9-d505-4511-8aa6-3c50c77f3e96] Running

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009948908s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-334000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.84s)
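The PersistentVolumeClaim test verifies that a file written into the PVC-backed mount survives deleting and recreating the pod, i.e. that the default storage-provisioner actually backs the claim with persistent storage. A sketch of that check using the same kubectl commands as above; it assumes kubectl and the functional-334000 context are available on the host:

package main

import (
	"fmt"
	"os/exec"
)

// Sketch of the persistence check: write a marker file into the PVC-backed
// mount, recreate the pod, then confirm the file is still there. Context name,
// pod name and paths are the ones from the log above.
func run(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-334000"}, args...)...).CombinedOutput()
	fmt.Printf("%s\n", out)
	return err
}

func main() {
	_ = run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// After the new pod is Running, the file written before deletion should persist.
	_ = run("exec", "sp-pod", "--", "ls", "/tmp/mount")
}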

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cp testdata/cp-test.txt /home/docker/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt"

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 cp functional-334000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd2078460286/001/cp-test.txt

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh -n functional-334000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.18s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-334000 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-p8zrg" [eadca744-21df-401e-ae54-75b7fd21b2b1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0127 19:38:45.794607    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-p8zrg" [eadca744-21df-401e-ae54-75b7fd21b2b1] Running

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.013591488s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-334000 exec mysql-888f84dd9-p8zrg -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-334000 exec mysql-888f84dd9-p8zrg -- mysql -ppassword -e "show databases;": exit status 1 (153.697003ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-334000 exec mysql-888f84dd9-p8zrg -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-334000 exec mysql-888f84dd9-p8zrg -- mysql -ppassword -e "show databases;": exit status 1 (217.754959ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-334000 exec mysql-888f84dd9-p8zrg -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-334000 exec mysql-888f84dd9-p8zrg -- mysql -ppassword -e "show databases;": exit status 1 (128.304283ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-334000 exec mysql-888f84dd9-p8zrg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.15s)
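The non-zero exits above (ERROR 1045, then ERROR 2002) are most likely transient states of a freshly started mysqld (initialization, then restart), and the test simply retries the query until it succeeds. A sketch of such a retry loop over the same command; the pod name is the one from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Sketch only: retry `show databases;` against the mysql pod until it answers.
// The retry count and interval are arbitrary choices, not taken from the test.
func main() {
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-334000",
			"exec", "mysql-888f84dd9-p8zrg", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Println("mysql not ready yet, retrying:", err)
		time.Sleep(5 * time.Second)
	}
}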

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/4406/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo cat /etc/test/nested/copy/4406/hosts"

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/4406.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/4406.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/4406.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/4406.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/44062.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/44062.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/44062.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo cat /usr/share/ca-certificates/44062.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.86s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-334000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo systemctl is-active crio"

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 ssh "sudo systemctl is-active crio": exit status 1 (614.419255ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
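Here "inactive" on stdout together with the remote exit status 3 is the expected result: docker, not cri-o, is the container runtime for this profile, so systemctl reports the crio unit as not running and the test counts the non-zero exit as a pass. A sketch of the same check:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Sketch only: run the same `systemctl is-active crio` probe through minikube
// ssh and inspect the exit code (3 in this run, meaning the unit is inactive).
func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-334000",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("crio is not the active runtime (exit", exitErr.ExitCode(), ")")
	}
}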

                                                
                                    
x
+
TestFunctional/parallel/License (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 version -o=json --components
functional_test.go:2197: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 version -o=json --components: (1.142706686s)
--- PASS: TestFunctional/parallel/Version/components (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-334000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-334000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-334000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-334000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-334000 | ffd4cfbbe753e | 32.9MB |
| docker.io/localhost/my-image                | functional-334000 | 0d290fd6a608e | 1.24MB |
| docker.io/library/mysql                     | 5.7               | 9ec14ca3fec4d | 455MB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| docker.io/library/minikube-local-cache-test | functional-334000 | 4dfcb3fb17f0f | 30B    |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
2023/01/27 19:40:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-334000 image ls --format json:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"4dfcb3fb17f0f7e900415785727ddcd65b58bb818ed2dd11aaf960739cae071a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-334000"],"size":"30"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616df
be30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0d290fd6a608ea5ce7313ba69dde8da3ada4b333303cf5a0698dc50f437373a0","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-334000"],"size":"1240000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd","repoDigests":[],"repoTags":["docker.io/
library/mysql:5.7"],"size":"455000000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"ffd4cfbb
e753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-334000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
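The JSON listing is a flat array of objects with id, repoDigests, repoTags and size (size reported as a string of bytes). A sketch of parsing it; the struct simply mirrors the fields visible in the output above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names taken from the `image ls --format json` output shown above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size is a string of bytes, e.g. "742000"
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-334000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("parse:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}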

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-334000 image ls --format yaml:
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 4dfcb3fb17f0f7e900415785727ddcd65b58bb818ed2dd11aaf960739cae071a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-334000
size: "30"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-334000
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 ssh pgrep buildkitd: exit status 1 (400.281683ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build: (3.524539158s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-334000 image build -t localhost/my-image:functional-334000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c76eb7c2c0bf
Removing intermediate container c76eb7c2c0bf
---> 9583aff78f4a
Step 3/3 : ADD content.txt /
---> 0d290fd6a608
Successfully built 0d290fd6a608
Successfully tagged localhost/my-image:functional-334000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.42s)
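The three build steps imply a Dockerfile along the lines of FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt / (reconstructed from the log, not read from testdata/build; the contents of content.txt are unknown and faked below). A sketch that recreates such a build context and replays the image build command:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// Dockerfile reconstructed from the build steps above; not copied from testdata/build.
const dockerfile = `FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
`

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// Placeholder content; the real content.txt in testdata/build is not shown in this report.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("placeholder\n"), 0o644); err != nil {
		panic(err)
	}
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-334000",
		"image", "build", "-t", "localhost/my-image:functional-334000", dir).CombinedOutput()
	fmt.Printf("%s\nerr=%v\n", out, err)
}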

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.383909238s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-334000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.46s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv/bash (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-334000 docker-env) && out/minikube-darwin-amd64 status -p functional-334000"
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-334000 docker-env) && out/minikube-darwin-amd64 status -p functional-334000": (1.262493117s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-334000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000: (3.492709501s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.86s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000: (2.228813615s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (7.185570195s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-334000
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 image load --daemon gcr.io/google-containers/addon-resizer:functional-334000: (3.640845344s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image save gcr.io/google-containers/addon-resizer:functional-334000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 image save gcr.io/google-containers/addon-resizer:functional-334000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.280308983s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image rm gcr.io/google-containers/addon-resizer:functional-334000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.611305897s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-334000
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 image save --daemon gcr.io/google-containers/addon-resizer:functional-334000

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-334000 image save --daemon gcr.io/google-containers/addon-resizer:functional-334000: (2.640201283s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-334000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.76s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-334000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-334000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [52182983-7f6f-480f-adae-e8c7aadcf8bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [52182983-7f6f-480f-adae-e8c7aadcf8bf] Running

                                                
                                                
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.008697717s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-334000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
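Note: the jsonpath query above only returns an address once the tunnel started in StartTunnel has assigned one to the LoadBalancer service. A minimal, hypothetical Go sketch of polling that same query outside the test suite follows; the helper name, retry count, and interval are illustrative and not taken from the suite.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ingressIP runs the same kubectl jsonpath query shown in the log and returns
// whatever address (if any) the service currently reports.
func ingressIP(context, svc string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "get", "svc", svc,
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Poll until "minikube tunnel" has populated the LoadBalancer ingress IP.
	for i := 0; i < 60; i++ {
		if ip, err := ingressIP("functional-334000", "nginx-svc"); err == nil && ip != "" {
			fmt.Println("ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned within the polling window")
}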

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-334000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 6751: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "434.092952ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "84.792558ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "440.781145ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "83.052127ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.77s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-334000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1870718991/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1674877175511398000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1870718991/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674877175511398000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1870718991/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674877175511398000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1870718991/001/test-1674877175511398000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (423.197751ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 28 03:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 28 03:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 28 03:39 test-1674877175511398000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh cat /mount-9p/test-1674877175511398000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-334000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ff3fa5d4-92c5-4cd6-85b1-7aad4beeac8b] Pending
helpers_test.go:344: "busybox-mount" [ff3fa5d4-92c5-4cd6-85b1-7aad4beeac8b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [ff3fa5d4-92c5-4cd6-85b1-7aad4beeac8b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [ff3fa5d4-92c5-4cd6-85b1-7aad4beeac8b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007915869s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-334000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh stat /mount-9p/created-by-pod

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-334000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port1870718991/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.77s)
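Note: the first findmnt probe above exits non-zero because the 9p mount is not yet visible inside the guest, and the test simply re-runs the probe. A minimal, hypothetical Go sketch of that retry pattern follows; waitForMount and its timeout are illustrative, not the suite's implementation, though the binary path and command line match the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs the probe from the log until the 9p mount shows up in
// the guest or the deadline passes.
func waitForMount(profile, mountPoint string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		probe := fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint)
		if err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "ssh", probe).Run(); err == nil {
			return nil // mount is visible inside the guest
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mount %s not ready after %s", mountPoint, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForMount("functional-334000", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}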

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.51s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-334000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1748932394/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (421.288121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-334000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1748932394/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-334000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-334000 ssh "sudo umount -f /mount-9p": exit status 1 (395.900731ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-334000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-334000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port1748932394/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.51s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-334000
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-334000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-334000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.2s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-205000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-205000: (2.196828507s)
--- PASS: TestImageBuild/serial/NormalBuild (2.20s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.94s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-205000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.48s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-205000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.43s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-205000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.43s)

                                                
                                    
TestJSONOutput/start/Command (47.35s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-315000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0127 19:48:25.300671    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:48:44.651543    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-315000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (47.347696569s)
--- PASS: TestJSONOutput/start/Command (47.35s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-315000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-315000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.86s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-315000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-315000 --output=json --user=testUser: (5.863625094s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.77s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-883000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-883000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (359.414635ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d860e4d2-5210-465e-b395-86a648fcf044","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-883000] minikube v1.28.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"516da05b-12e2-4879-892b-a7b286ca1ad3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"5ba2cbbe-9518-4d6c-b15e-38595eafa06f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig"}}
	{"specversion":"1.0","id":"1e3a246e-d59a-465d-90de-9b449ab8e874","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"f2da2ea0-e91c-4cfc-9f8d-798bfbf7ca23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e451cd0a-2314-4c30-88d3-d8c8e6ec234e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube"}}
	{"specversion":"1.0","id":"6228f4e5-8edb-4c01-a583-219b0262538f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d40b4816-5d42-41fb-9806-61b7374906a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-883000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-883000
--- PASS: TestErrorJSONOutput (0.77s)
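Note: each stdout line captured above is a CloudEvents-style JSON object, which is what --output=json emits. A minimal, hypothetical Go sketch of decoding such lines follows; the cloudEvent struct only mirrors the keys visible in this log and is not a type taken from minikube itself.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors only the keys visible in the captured stdout above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Read JSON lines, e.g. piped from: minikube start ... --output=json
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// e.g. DRV_UNSUPPORTED_OS with exitcode 56, as in the run above
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}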

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.6s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-607000 --network=
E0127 19:49:12.345832    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-607000 --network=: (32.896421601s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-607000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-607000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-607000: (2.644622429s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.60s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (42.37s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-840000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-840000 --network=bridge: (39.812595984s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-840000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-840000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-840000: (2.496327179s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (42.37s)

                                                
                                    
TestKicExistingNetwork (37.71s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-089000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-089000 --network=existing-network: (34.865488685s)
helpers_test.go:175: Cleaning up "existing-network-089000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-089000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-089000: (2.480159137s)
--- PASS: TestKicExistingNetwork (37.71s)

                                                
                                    
TestKicCustomSubnet (36.13s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-428000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-428000 --subnet=192.168.60.0/24: (33.329021275s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-428000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-428000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-428000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-428000: (2.744380427s)
--- PASS: TestKicCustomSubnet (36.13s)

                                                
                                    
TestKicStaticIP (33.13s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-969000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-969000 --static-ip=192.168.200.200: (30.287654349s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-969000 ip
helpers_test.go:175: Cleaning up "static-ip-969000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-969000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-969000: (2.587802011s)
--- PASS: TestKicStaticIP (33.13s)

                                                
                                    
TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (70.1s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-196000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-196000 --driver=docker : (30.551721378s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-198000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-198000 --driver=docker : (32.453257718s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-196000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-198000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-198000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-198000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-198000: (2.606816074s)
helpers_test.go:175: Cleaning up "first-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-196000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-196000: (2.639111158s)
--- PASS: TestMinikubeProfile (70.10s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.03s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-916000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0127 19:53:25.298498    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-916000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.025443443s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.03s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-916000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.15s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-939000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-939000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.148972268s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.15s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-939000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.15s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-916000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-916000 --alsologtostderr -v=5: (2.15352883s)
--- PASS: TestMountStart/serial/DeleteFirst (2.15s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-939000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.59s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-939000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-939000: (1.58910593s)
--- PASS: TestMountStart/serial/Stop (1.59s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.18s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-939000
E0127 19:53:44.663625    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-939000: (5.176773781s)
--- PASS: TestMountStart/serial/RestartStopped (6.18s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-939000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (82s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-151000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0127 19:54:48.372992    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-151000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m21.261187931s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.00s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (10.84s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-151000 -- rollout status deployment/busybox: (8.9286456s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-cc9jl -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-rv625 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-cc9jl -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-rv625 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-cc9jl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-rv625 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.84s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-cc9jl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-cc9jl -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-rv625 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-151000 -- exec busybox-6b86dd6d48-rv625 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                    
TestMultiNode/serial/AddNode (22.62s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-151000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-151000 -v 3 --alsologtostderr: (21.498767486s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr: (1.121641126s)
--- PASS: TestMultiNode/serial/AddNode (22.62s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.48s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

                                                
                                    
TestMultiNode/serial/CopyFile (15.37s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-151000 status --output json --alsologtostderr: (1.061508758s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp testdata/cp-test.txt multinode-151000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile215192911/001/cp-test_multinode-151000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000:/home/docker/cp-test.txt multinode-151000-m02:/home/docker/cp-test_multinode-151000_multinode-151000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m02 "sudo cat /home/docker/cp-test_multinode-151000_multinode-151000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000:/home/docker/cp-test.txt multinode-151000-m03:/home/docker/cp-test_multinode-151000_multinode-151000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m03 "sudo cat /home/docker/cp-test_multinode-151000_multinode-151000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp testdata/cp-test.txt multinode-151000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile215192911/001/cp-test_multinode-151000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000-m02:/home/docker/cp-test.txt multinode-151000:/home/docker/cp-test_multinode-151000-m02_multinode-151000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000 "sudo cat /home/docker/cp-test_multinode-151000-m02_multinode-151000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000-m02:/home/docker/cp-test.txt multinode-151000-m03:/home/docker/cp-test_multinode-151000-m02_multinode-151000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m03 "sudo cat /home/docker/cp-test_multinode-151000-m02_multinode-151000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp testdata/cp-test.txt multinode-151000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile215192911/001/cp-test_multinode-151000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000-m03:/home/docker/cp-test.txt multinode-151000:/home/docker/cp-test_multinode-151000-m03_multinode-151000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000 "sudo cat /home/docker/cp-test_multinode-151000-m03_multinode-151000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 cp multinode-151000-m03:/home/docker/cp-test.txt multinode-151000-m02:/home/docker/cp-test_multinode-151000-m03_multinode-151000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 ssh -n multinode-151000-m02 "sudo cat /home/docker/cp-test_multinode-151000-m03_multinode-151000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.37s)
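Note: the CopyFile block above repeats one pattern per node pair: "minikube cp" places a file on a node, then "ssh -n <node> sudo cat" reads it back for comparison. A hypothetical Go sketch of that copy-and-verify loop follows; copyAndVerify is illustrative and is not the helpers_test.go implementation, though the commands, profile, and node names match the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const minikube = "out/minikube-darwin-amd64" // binary path used throughout the log

// copyAndVerify pushes src to node:dst with "minikube cp" and reads it back over ssh.
func copyAndVerify(profile, node, src, dst string) error {
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if out, err := exec.Command(minikube, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp to %s failed: %v: %s", node, err, out)
	}
	got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		return err
	}
	if string(got) != string(want) {
		return fmt.Errorf("content mismatch on %s", node)
	}
	return nil
}

func main() {
	nodes := []string{"multinode-151000", "multinode-151000-m02", "multinode-151000-m03"}
	for _, node := range nodes {
		if err := copyAndVerify("multinode-151000", node, "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
			fmt.Println(err)
		}
	}
}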

                                                
                                    
TestMultiNode/serial/StopNode (3.11s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-151000 node stop m03: (1.539515388s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-151000 status: exit status 7 (792.121683ms)

                                                
                                                
-- stdout --
	multinode-151000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-151000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr: exit status 7 (779.218507ms)

                                                
                                                
-- stdout --
	multinode-151000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-151000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 19:56:08.159281   10924 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:56:08.159479   10924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:56:08.159484   10924 out.go:309] Setting ErrFile to fd 2...
	I0127 19:56:08.159488   10924 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:56:08.159610   10924 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 19:56:08.159796   10924 out.go:303] Setting JSON to false
	I0127 19:56:08.159819   10924 mustload.go:65] Loading cluster: multinode-151000
	I0127 19:56:08.159859   10924 notify.go:220] Checking for updates...
	I0127 19:56:08.160124   10924 config.go:180] Loaded profile config "multinode-151000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:56:08.160134   10924 status.go:255] checking status of multinode-151000 ...
	I0127 19:56:08.160531   10924 cli_runner.go:164] Run: docker container inspect multinode-151000 --format={{.State.Status}}
	I0127 19:56:08.220628   10924 status.go:330] multinode-151000 host status = "Running" (err=<nil>)
	I0127 19:56:08.220656   10924 host.go:66] Checking if "multinode-151000" exists ...
	I0127 19:56:08.220893   10924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-151000
	I0127 19:56:08.281578   10924 host.go:66] Checking if "multinode-151000" exists ...
	I0127 19:56:08.281860   10924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 19:56:08.281921   10924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-151000
	I0127 19:56:08.341778   10924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51364 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/multinode-151000/id_rsa Username:docker}
	I0127 19:56:08.434541   10924 ssh_runner.go:195] Run: systemctl --version
	I0127 19:56:08.438976   10924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 19:56:08.448580   10924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-151000
	I0127 19:56:08.507922   10924 kubeconfig.go:92] found "multinode-151000" server: "https://127.0.0.1:51368"
	I0127 19:56:08.507949   10924 api_server.go:165] Checking apiserver status ...
	I0127 19:56:08.508008   10924 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 19:56:08.518730   10924 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1840/cgroup
	W0127 19:56:08.528444   10924 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1840/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 19:56:08.528524   10924 ssh_runner.go:195] Run: ls
	I0127 19:56:08.533120   10924 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51368/healthz ...
	I0127 19:56:08.538195   10924 api_server.go:278] https://127.0.0.1:51368/healthz returned 200:
	ok
	I0127 19:56:08.538207   10924 status.go:421] multinode-151000 apiserver status = Running (err=<nil>)
	I0127 19:56:08.538217   10924 status.go:257] multinode-151000 status: &{Name:multinode-151000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 19:56:08.538232   10924 status.go:255] checking status of multinode-151000-m02 ...
	I0127 19:56:08.538470   10924 cli_runner.go:164] Run: docker container inspect multinode-151000-m02 --format={{.State.Status}}
	I0127 19:56:08.597810   10924 status.go:330] multinode-151000-m02 host status = "Running" (err=<nil>)
	I0127 19:56:08.597834   10924 host.go:66] Checking if "multinode-151000-m02" exists ...
	I0127 19:56:08.598109   10924 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-151000-m02
	I0127 19:56:08.659112   10924 host.go:66] Checking if "multinode-151000-m02" exists ...
	I0127 19:56:08.659387   10924 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 19:56:08.659442   10924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-151000-m02
	I0127 19:56:08.718324   10924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51444 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3092/.minikube/machines/multinode-151000-m02/id_rsa Username:docker}
	I0127 19:56:08.810598   10924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 19:56:08.820320   10924 status.go:257] multinode-151000-m02 status: &{Name:multinode-151000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 19:56:08.820369   10924 status.go:255] checking status of multinode-151000-m03 ...
	I0127 19:56:08.820646   10924 cli_runner.go:164] Run: docker container inspect multinode-151000-m03 --format={{.State.Status}}
	I0127 19:56:08.879777   10924 status.go:330] multinode-151000-m03 host status = "Stopped" (err=<nil>)
	I0127 19:56:08.879797   10924 status.go:343] host is not running, skipping remaining checks
	I0127 19:56:08.879804   10924 status.go:257] multinode-151000-m03 status: &{Name:multinode-151000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
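For anyone reproducing this status check outside the suite, here is a minimal Go sketch (illustrative only; it assumes the locally built out/minikube-darwin-amd64 binary in the working directory and the multinode-151000 profile from this run) that invokes `status` and surfaces the exit code, which the output above shows is 7 while a node is stopped:

// status_exitcode_sketch.go — illustrative only, not part of the test suite.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Profile name taken from the log above; the binary path is an assumption.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-151000", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log above shows exit status 7 when a node's host/kubelet is Stopped.
		fmt.Printf("status exit code: %d\n", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	fmt.Println("all nodes running (exit code 0)")
}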
--- PASS: TestMultiNode/serial/StopNode (3.11s)

TestMultiNode/serial/StartAfterStop (10.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-151000 node start m03 --alsologtostderr: (9.374630649s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-151000 status: (1.083730201s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.63s)

TestMultiNode/serial/RestartKeepsNodes (88.55s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-151000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-151000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-151000: (23.11602898s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr: (1m5.3015404s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-151000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.55s)

TestMultiNode/serial/DeleteNode (6.36s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-151000 node delete m03: (5.339327971s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.36s)

TestMultiNode/serial/StopMultiNode (22.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-151000 stop: (21.663157301s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-151000 status: exit status 7 (174.597514ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-151000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr: exit status 7 (173.829881ms)

-- stdout --
	multinode-151000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-151000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 19:58:16.307183   11485 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:58:16.307341   11485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:58:16.307346   11485 out.go:309] Setting ErrFile to fd 2...
	I0127 19:58:16.307351   11485 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:58:16.307467   11485 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3092/.minikube/bin
	I0127 19:58:16.307648   11485 out.go:303] Setting JSON to false
	I0127 19:58:16.307673   11485 mustload.go:65] Loading cluster: multinode-151000
	I0127 19:58:16.307724   11485 notify.go:220] Checking for updates...
	I0127 19:58:16.308015   11485 config.go:180] Loaded profile config "multinode-151000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:58:16.308028   11485 status.go:255] checking status of multinode-151000 ...
	I0127 19:58:16.308457   11485 cli_runner.go:164] Run: docker container inspect multinode-151000 --format={{.State.Status}}
	I0127 19:58:16.366718   11485 status.go:330] multinode-151000 host status = "Stopped" (err=<nil>)
	I0127 19:58:16.366751   11485 status.go:343] host is not running, skipping remaining checks
	I0127 19:58:16.366757   11485 status.go:257] multinode-151000 status: &{Name:multinode-151000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 19:58:16.366776   11485 status.go:255] checking status of multinode-151000-m02 ...
	I0127 19:58:16.367018   11485 cli_runner.go:164] Run: docker container inspect multinode-151000-m02 --format={{.State.Status}}
	I0127 19:58:16.423152   11485 status.go:330] multinode-151000-m02 host status = "Stopped" (err=<nil>)
	I0127 19:58:16.423177   11485 status.go:343] host is not running, skipping remaining checks
	I0127 19:58:16.423186   11485 status.go:257] multinode-151000-m02 status: &{Name:multinode-151000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.01s)

TestMultiNode/serial/RestartMultiNode (53.82s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0127 19:58:25.317664    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
E0127 19:58:44.669986    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-151000 --wait=true -v=8 --alsologtostderr --driver=docker : (52.891189155s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-151000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.82s)

TestMultiNode/serial/ValidateNameConflict (37.52s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-151000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-151000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-151000-m02 --driver=docker : exit status 14 (401.671628ms)

-- stdout --
	* [multinode-151000-m02] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-151000-m02' is duplicated with machine name 'multinode-151000-m02' in profile 'multinode-151000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-151000-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-151000-m03 --driver=docker : (33.902185122s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-151000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-151000: exit status 80 (492.039013ms)

-- stdout --
	* Adding node m03 to cluster multinode-151000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-151000-m03 already exists in multinode-151000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-151000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-151000-m03: (2.660286171s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.52s)

TestPreload (123.97s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0127 20:00:07.722210    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-373000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m4.042045101s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-373000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-373000 -- docker pull gcr.io/k8s-minikube/busybox: (2.230245624s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-373000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-373000: (10.941589683s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-373000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-373000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (43.48205233s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-373000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-373000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-373000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-373000: (2.82061212s)
--- PASS: TestPreload (123.97s)

TestScheduledStopUnix (108.38s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-630000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-630000 --memory=2048 --driver=docker : (34.020514492s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-630000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-630000 -n scheduled-stop-630000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-630000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-630000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-630000 -n scheduled-stop-630000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-630000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-630000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0127 20:03:25.315600    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-630000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-630000: exit status 7 (121.000885ms)

-- stdout --
	scheduled-stop-630000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-630000 -n scheduled-stop-630000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-630000 -n scheduled-stop-630000: exit status 7 (113.837839ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
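The sequence above schedules a stop and then checks that the host eventually reports Stopped. A minimal Go sketch of the same flow, assuming the locally built binary and an existing scheduled-stop-630000 profile (the polling interval and retry count are arbitrary):

// scheduled_stop_sketch.go — illustrative only, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-630000" // profile name reused from the log above

	// Ask minikube to stop the profile 15 seconds from now, as the test does.
	if out, err := exec.Command("out/minikube-darwin-amd64", "stop", "-p", profile, "--schedule", "15s").CombinedOutput(); err != nil {
		fmt.Printf("schedule failed: %v\n%s", err, out)
		return
	}

	// Poll the host state until it reports Stopped; `status` exits non-zero
	// (7 in the log above) once the host is down, so ignore the error and read stdout.
	for i := 0; i < 12; i++ {
		time.Sleep(10 * time.Second)
		out, _ := exec.Command("out/minikube-darwin-amd64", "status", "--format", "{{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		fmt.Println("host:", state)
		if state == "Stopped" {
			return
		}
	}
	fmt.Println("gave up waiting for the scheduled stop")
}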
helpers_test.go:175: Cleaning up "scheduled-stop-630000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-630000
E0127 20:03:44.665808    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-630000: (2.331101215s)
--- PASS: TestScheduledStopUnix (108.38s)

TestSkaffold (67.56s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2942500704 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-071000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-071000 --memory=2600 --driver=docker : (35.700343296s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2942500704 run --minikube-profile skaffold-071000 --kube-context skaffold-071000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2942500704 run --minikube-profile skaffold-071000 --kube-context skaffold-071000 --status-check=true --port-forward=false --interactive=false: (17.254008694s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-697b5bd889-8529p" [bdd91914-d7ef-4a44-a96a-961e2add2a7f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012042205s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-597444f7dd-ns27p" [d380a0a5-1bee-4d54-a67f-b11d640a22f1] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009589187s
helpers_test.go:175: Cleaning up "skaffold-071000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-071000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-071000: (2.93707254s)
--- PASS: TestSkaffold (67.56s)

TestInsufficientStorage (15.63s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-245000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-245000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (12.374283322s)

-- stdout --
	{"specversion":"1.0","id":"1ce117d4-5c50-4fdc-98d4-c1f14e4ba991","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-245000] minikube v1.28.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6446700-c9e6-470f-a577-60758aafb66a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"9c3cd9f6-ce14-4248-a7d2-edc5e2394ef2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig"}}
	{"specversion":"1.0","id":"8cb7878e-9cf9-4e63-8da0-727c8db02a80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"323c462f-bef3-4530-a2b2-66c05c7742c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ec835607-6434-40de-bf85-09da09d8f498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube"}}
	{"specversion":"1.0","id":"a179421a-4a45-402a-b2d5-c2c6a5107df7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e99c5cb0-9e97-4fe7-b364-e109a1a1519a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6fd9f83c-d4fa-4024-9ac6-155cee18985b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f532adc0-1959-45a2-8927-dd2f89c71a83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8548fe84-b8f4-4544-82d4-3d30a37af0f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"9b7f6019-0f86-43c4-9997-053f905c06a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-245000 in cluster insufficient-storage-245000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f43c0840-4561-4476-9618-bbd8377b5b47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfcfb2c3-f9a1-4efa-ae63-bfa82fab9d1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"918e6c6a-46fa-4924-8aa2-97186c38f0fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-245000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-245000 --output=json --layout=cluster: exit status 7 (415.88054ms)

-- stdout --
	{"Name":"insufficient-storage-245000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-245000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 20:05:05.266912   13310 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-245000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig

** /stderr **
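The --output=json --layout=cluster payload shown above can be decoded programmatically. A minimal Go sketch follows; the struct mirrors only the fields visible in this log, not necessarily the full schema, and the binary path and profile name are taken from this run:

// layout_status_sketch.go — illustrative only, not part of the test suite.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// `status` exits 7 here (see the log above), so keep stdout even when err != nil.
	out, _ := exec.Command("out/minikube-darwin-amd64", "status",
		"-p", "insufficient-storage-245000", "--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unexpected status payload:", err)
		return
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  node %s / %s: %s\n", n.Name, name, c.StatusName)
		}
	}
}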
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-245000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-245000 --output=json --layout=cluster: exit status 7 (414.326647ms)

-- stdout --
	{"Name":"insufficient-storage-245000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-245000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0127 20:05:05.682351   13322 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-245000" does not appear in /Users/jenkins/minikube-integration/15565-3092/kubeconfig
	E0127 20:05:05.691699   13322 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/insufficient-storage-245000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-245000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-245000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-245000: (2.422330461s)
--- PASS: TestInsufficientStorage (15.63s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.99s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3255229993/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3255229993/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3255229993/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3255229993/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (6.99s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.05s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2633902716/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2633902716/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2633902716/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2633902716/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.05s)

TestStoppedBinaryUpgrade/Setup (0.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-832000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-832000: (3.590466966s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

TestPause/serial/Start (47.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-160000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0127 20:12:23.386615    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-160000 --memory=2048 --install-addons=false --wait=all --driver=docker : (47.044482125s)
--- PASS: TestPause/serial/Start (47.04s)

TestPause/serial/SecondStartNoReconfiguration (47.83s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-160000 --alsologtostderr -v=1 --driver=docker 
E0127 20:13:25.332591    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/addons-492000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-160000 --alsologtostderr -v=1 --driver=docker : (47.816214099s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.83s)

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-160000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-160000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-160000 --output=json --layout=cluster: exit status 2 (434.634258ms)

-- stdout --
	{"Name":"pause-160000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-160000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-160000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.77s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-160000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

TestPause/serial/DeletePaused (2.68s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-160000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-160000 --alsologtostderr -v=5: (2.682798275s)
--- PASS: TestPause/serial/DeletePaused (2.68s)

TestPause/serial/VerifyDeletedResources (0.59s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-160000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-160000: exit status 1 (55.519534ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-160000

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
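The cleanup check above relies on `docker volume inspect` exiting non-zero once the profile's volume is gone. A small Go sketch of the same verification, assuming the docker CLI is on PATH and the pause-160000 profile was just deleted:

// verify_deleted_sketch.go — illustrative only, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "pause-160000" // profile name from the log above

	// `docker volume inspect` exits non-zero when the volume is gone,
	// which is the desired outcome after deleting the profile.
	out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
	if err != nil {
		fmt.Printf("volume %s not found (good): %s", profile, out)
		return
	}
	fmt.Printf("volume %s still present:\n%s", profile, out)
}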
--- PASS: TestPause/serial/VerifyDeletedResources (0.59s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-220000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-220000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (380.224002ms)

-- stdout --
	* [NoKubernetes-220000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3092/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3092/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

TestNoKubernetes/serial/StartWithK8s (33.23s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-220000 --driver=docker 
E0127 20:13:44.684751    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-220000 --driver=docker : (32.773126859s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-220000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.23s)

TestNoKubernetes/serial/StartWithStopK8s (9.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-220000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-220000 --no-kubernetes --driver=docker : (6.4014079s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-220000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-220000 status -o json: exit status 2 (437.379497ms)

-- stdout --
	{"Name":"NoKubernetes-220000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
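The `status -o json` document shown above is a flat, per-profile structure. A minimal Go sketch for decoding it, with fields limited to those visible in this log; exit status 2 is expected while Kubernetes is stopped, so stdout is read even when the command reports an error:

// nok8s_status_sketch.go — illustrative only, not part of the test suite.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileStatus struct {
	Name      string `json:"Name"`
	Host      string `json:"Host"`
	Kubelet   string `json:"Kubelet"`
	APIServer string `json:"APIServer"`
}

func main() {
	// Binary path and profile name are taken from the log above.
	out, _ := exec.Command("out/minikube-darwin-amd64",
		"-p", "NoKubernetes-220000", "status", "-o", "json").Output()

	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unexpected status payload:", err)
		return
	}
	// With --no-kubernetes the host runs but kubelet/apiserver stay stopped.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}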
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-220000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-220000: (2.45743057s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.30s)

TestNoKubernetes/serial/Start (7.43s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-220000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-220000 --no-kubernetes --driver=docker : (7.425395432s)
--- PASS: TestNoKubernetes/serial/Start (7.43s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-220000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-220000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (398.897695ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)

TestNoKubernetes/serial/ProfileList (30.67s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
E0127 20:14:39.546195    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (15.219193613s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (15.450159358s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.67s)

TestNoKubernetes/serial/Stop (1.57s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-220000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-220000: (1.570103893s)
--- PASS: TestNoKubernetes/serial/Stop (1.57s)

TestNoKubernetes/serial/StartNoArgs (4.98s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-220000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-220000 --driver=docker : (4.983456203s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.98s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-220000 "sudo systemctl is-active --quiet service kubelet"
E0127 20:15:07.226777    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-220000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (390.394333ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestNetworkPlugins/group/auto/Start (46.32s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (46.317343351s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.32s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (15.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-259000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-lfdld" [ae4fb62c-6285-4469-a913-b41babec6a25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-lfdld" [ae4fb62c-6285-4469-a913-b41babec6a25] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.005634942s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.19s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
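
The HairPin step has the netcat pod dial its own Service name, which only succeeds when hairpin traffic (pod to its own Service and back to the same pod) is forwarded correctly. A small Go wrapper around the same kubectl command, assuming the auto-259000 context from this run:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "auto-259000",
			"exec", "deployment/netcat", "--",
			"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("hairpin connection failed: %v\n%s", err, out)
		}
		log.Println("hairpin OK")
	}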

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (75.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0127 20:16:47.739149    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m15.046354237s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-r8fmm" [99a3cb39-885f-457e-b136-bd4ed41660f5] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017980779s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)
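
ControllerPod waits for the CNI's node agent (here k8s-app=calico-node) to be healthy. An equivalent check, sketched below, waits for the calico-node DaemonSet rollout instead of polling pods directly; the DaemonSet name is inferred from the pod name in the log, not taken from the test code.

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "calico-259000", "-n", "kube-system",
			"rollout", "status", "daemonset/calico-node", "--timeout=10m")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("calico-node rollout not complete: %v\n%s", err, out)
		}
		log.Println("calico-node DaemonSet is ready")
	}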

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (20.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-259000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-nvkcw" [52ec3938-5832-466f-b5ad-f598eee3387a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-nvkcw" [52ec3938-5832-466f-b5ad-f598eee3387a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 20.007366743s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (20.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (64.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
E0127 20:18:44.685057    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (1m4.728091432s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.73s)
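
All Start steps in this group are the same minikube invocation with a different CNI selector: --cni=calico, --cni=kindnet, --cni=flannel, --cni=bridge, --cni=false, a manifest path such as --cni=testdata/kube-flannel.yaml, --enable-default-cni=true, or --network-plugin=kubenet. A sketch of driving that matrix from Go follows; startWithCNI is a hypothetical helper and the flag values are copied from the log.

	package main

	import (
		"log"
		"os/exec"
	)

	// startWithCNI starts a profile with the shared flags used throughout this
	// group, plus whatever CNI-selecting flags the caller passes.
	func startWithCNI(profile string, extraFlags ...string) error {
		args := []string{"start", "-p", profile, "--memory=3072",
			"--alsologtostderr", "--wait=true", "--wait-timeout=15m", "--driver=docker"}
		args = append(args, extraFlags...)
		return exec.Command("out/minikube-darwin-amd64", args...).Run()
	}

	func main() {
		if err := startWithCNI("custom-flannel-259000", "--cni=testdata/kube-flannel.yaml"); err != nil {
			log.Fatal(err)
		}
	}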

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (48.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
E0127 20:19:39.544571    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (48.332298016s)
--- PASS: TestNetworkPlugins/group/false/Start (48.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (20.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-259000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-x2446" [422abfe9-dac9-4c11-867b-db8361b738d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
helpers_test.go:344: "netcat-694fc96674-x2446" [422abfe9-dac9-4c11-867b-db8361b738d8] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 20.00839037s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (20.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (16.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-259000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-g54h2" [7eca3221-051d-4454-9138-934cccb343c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-g54h2" [7eca3221-051d-4454-9138-934cccb343c6] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 16.007944265s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (16.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (58.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (58.437029405s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0127 20:20:56.580062    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:56.585169    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:56.595262    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:56.615634    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:56.655765    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:56.736039    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:56.896137    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:57.216348    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:57.856601    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:20:59.137006    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:21:01.698502    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:21:06.819058    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:21:17.059309    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (50.991195196s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bkhvz" [b0aab1b7-6d09-4a96-8f2b-8a99e94d6b47] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015039586s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-z5kc8" [c25a28d4-db1b-4fa5-bd99-b528b8456054] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016810243s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (18.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-259000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-g6s5q" [2dbf1435-8d01-4b86-9a0b-9cb27481ccee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:344: "netcat-694fc96674-g6s5q" [2dbf1435-8d01-4b86-9a0b-9cb27481ccee] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 18.006849449s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (18.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (18.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-259000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2qb2z" [9d111e3c-bfce-42fe-a49f-90d3c798f0e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 20:21:37.539892    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-2qb2z" [9d111e3c-bfce-42fe-a49f-90d3c798f0e1] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 18.009012659s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (18.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (48.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (48.796447777s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (48.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0127 20:22:51.187553    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:51.192696    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:51.202935    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:51.223362    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:51.263474    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:51.343840    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:51.504564    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:51.825883    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:52.466089    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:53.746203    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:22:56.307625    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:23:01.428100    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (48.225054988s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-259000 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-qpfkz" [4008d21f-a87e-4053-874c-ab391a0b6cf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:344: "netcat-694fc96674-qpfkz" [4008d21f-a87e-4053-874c-ab391a0b6cf7] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.009615928s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (14.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-259000 replace --force -f testdata/netcat-deployment.yaml

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8mchf" [2ea6bf96-1824-4b3b-925d-9e70854dc427] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 20:23:11.668176    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:344: "netcat-694fc96674-8mchf" [2ea6bf96-1824-4b3b-925d-9e70854dc427] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.011724326s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)
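
The DNS steps all run "nslookup kubernetes.default" inside the netcat pod; that short name only resolves through the cluster DNS search path kubelet writes into the pod's resolv.conf. What the check exercises, sketched as a tiny Go program that would have to run inside a pod (not on the host):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Inside a pod, "kubernetes.default" expands via the resolv.conf search
		// domains to the API server's in-cluster Service name and should resolve
		// to its ClusterIP.
		addrs, err := net.LookupHost("kubernetes.default")
		if err != nil {
			fmt.Fprintln(os.Stderr, "cluster DNS lookup failed:", err)
			os.Exit(1)
		}
		fmt.Println("kubernetes.default ->", addrs)
	}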

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (50.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-259000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (50.569086028s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (50.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-259000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (18.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-259000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-95jfb" [8c7adf6d-81ec-4bb4-9978-e9b956b81d56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 20:24:39.543023    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:24:50.063688    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.069461    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.079611    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.100175    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.140568    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.220903    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.350948    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:50.356017    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:50.367334    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:50.380972    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.387372    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:50.427956    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:50.508788    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:50.668928    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:50.701545    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:50.989939    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:51.342380    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:51.631520    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-95jfb" [8c7adf6d-81ec-4bb4-9978-e9b956b81d56] Running
E0127 20:24:52.622548    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:52.913850    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:24:55.183987    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:24:55.476081    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 18.007894743s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (18.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-259000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-259000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (56.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-711000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0127 20:25:31.025266    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:25:31.319035    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:25:35.028888    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:25:56.578884    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
E0127 20:26:02.584433    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/skaffold-071000/client.crt: no such file or directory
E0127 20:26:11.985630    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:26:12.279716    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-711000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (56.869227427s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-711000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [470d12ba-359d-407e-bb9e-3008a7dcdb99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [470d12ba-359d-407e-bb9e-3008a7dcdb99] Running
E0127 20:26:24.266149    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.01400873s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-711000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.27s)
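
DeployApp creates the busybox pod and then reads its open-file limit with "ulimit -n". A sketch that wraps the same kubectl exec and parses the number; the parsing step is illustrative, since the test above only runs the command and records its output.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strconv"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "no-preload-711000",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			log.Fatal(err)
		}
		n, err := strconv.Atoi(strings.TrimSpace(string(out)))
		if err != nil {
			log.Fatalf("unexpected ulimit output %q: %v", out, err)
		}
		fmt.Println("open-file limit in busybox:", n)
	}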

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-711000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-711000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-711000 --alsologtostderr -v=3
E0127 20:26:28.842668    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:28.848040    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:28.858217    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:28.878460    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:28.918799    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:29.000243    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:29.229231    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:29.549925    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:30.176921    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:30.182152    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:30.191346    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:30.192355    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:30.212459    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:30.254611    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:30.334893    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:30.495357    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:30.816285    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:31.459219    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:31.474824    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:32.742753    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:34.038818    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:35.306661    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:39.164966    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-711000 --alsologtostderr -v=3: (11.031902695s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-711000 -n no-preload-711000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-711000 -n no-preload-711000: exit status 7 (118.272733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-711000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.40s)
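
The "(may be ok)" note above reflects that minikube status reports state through its exit code as well as stdout: in this run, exit status 7 came with a Stopped host, which the test accepts before enabling the dashboard addon. A sketch of tolerating that specific case, mirroring the command from the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		profile := "no-preload-711000"
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			// Matches the log above: non-zero status with "Stopped" on stdout is
			// treated as acceptable, not as a test failure.
			log.Printf("host reported %q with exit 7; continuing", out)
			return
		}
		if err != nil {
			log.Fatalf("status failed: %v", err)
		}
		log.Printf("host reported %q", out)
	}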

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (581.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-711000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0127 20:26:40.434428    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:26:49.414493    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:26:50.682977    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:27:09.900603    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:27:11.170365    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
E0127 20:27:33.951892    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/custom-flannel-259000/client.crt: no such file or directory
E0127 20:27:34.247934    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/false-259000/client.crt: no such file or directory
E0127 20:27:50.863960    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:27:51.233878    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/calico-259000/client.crt: no such file or directory
E0127 20:27:52.133118    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-711000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (9m40.65030431s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-711000 -n no-preload-711000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (581.11s)

TestStartStop/group/old-k8s-version/serial/Stop (1.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-720000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-720000 --alsologtostderr -v=3: (1.660586143s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-720000 -n old-k8s-version-720000: exit status 7 (117.757439ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-720000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-5cwqs" [e88ef39e-889b-4822-8c56-64d605fdd394] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013145029s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-5cwqs" [e88ef39e-889b-4822-8c56-64d605fdd394] Running
E0127 20:36:28.875387    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/flannel-259000/client.crt: no such file or directory
E0127 20:36:30.205873    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/kindnet-259000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008042585s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-711000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-711000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/no-preload/serial/Pause (3.35s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-711000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-711000 -n no-preload-711000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-711000 -n no-preload-711000: exit status 2 (432.032334ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-711000 -n no-preload-711000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-711000 -n no-preload-711000: exit status 2 (435.921003ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-711000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-711000 -n no-preload-711000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-711000 -n no-preload-711000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.35s)

TestStartStop/group/embed-certs/serial/FirstStart (49s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-216000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0127 20:37:19.668647    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/auto-259000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-216000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (48.995805488s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.00s)

TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-216000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [047f6148-9714-4c9e-a214-5ddd35969943] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [047f6148-9714-4c9e-a214-5ddd35969943] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.012982663s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-216000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-216000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-216000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/embed-certs/serial/Stop (11.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-216000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-216000 --alsologtostderr -v=3: (11.00882621s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.54s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-216000 -n embed-certs-216000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-216000 -n embed-certs-216000: exit status 7 (235.340194ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-216000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.54s)

TestStartStop/group/embed-certs/serial/SecondStart (556.43s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-216000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-216000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (9m15.983390497s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-216000 -n embed-certs-216000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (556.43s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-bpx4n" [b9bc08e2-d634-4f4a-a75e-654d8c688bd4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013192819s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-bpx4n" [b9bc08e2-d634-4f4a-a75e-654d8c688bd4] Running
=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01005644s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-216000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-216000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/embed-certs/serial/Pause (3.51s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-216000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-216000 -n embed-certs-216000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-216000 -n embed-certs-216000: exit status 2 (435.218727ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-216000 -n embed-certs-216000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-216000 -n embed-certs-216000: exit status 2 (437.300789ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-216000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-216000 -n embed-certs-216000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-216000 -n embed-certs-216000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.51s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-500000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-500000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (52.728314457s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.73s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-500000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f04012e-eedb-4d96-b19b-00aac4c33dee] Pending
helpers_test.go:344: "busybox" [1f04012e-eedb-4d96-b19b-00aac4c33dee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f04012e-eedb-4d96-b19b-00aac4c33dee] Running
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.014962491s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-500000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-500000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-500000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-500000 --alsologtostderr -v=3
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-500000 --alsologtostderr -v=3: (10.92013586s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000: exit status 7 (117.786266ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-500000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-500000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0127 20:48:44.740981    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-500000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (5m4.012474809s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.47s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-5l99w" [f29eb8ff-f814-4ab0-aa62-8e51ca8f6545] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0127 20:53:44.744128    4406 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3092/.minikube/profiles/functional-334000/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-5l99w" [f29eb8ff-f814-4ab0-aa62-8e51ca8f6545] Running
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.013024932s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-5l99w" [f29eb8ff-f814-4ab0-aa62-8e51ca8f6545] Running
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007096153s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-500000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-500000 "sudo crictl images -o json"
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-500000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000: exit status 2 (445.196719ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000: exit status 2 (433.380651ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-500000 --alsologtostderr -v=1
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-500000 -n default-k8s-diff-port-500000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.39s)

TestStartStop/group/newest-cni/serial/FirstStart (44.64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-686000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-686000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (44.637584547s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.64s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-686000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
=== CONT  TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/newest-cni/serial/Stop (10.99s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-686000 --alsologtostderr -v=3
=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-686000 --alsologtostderr -v=3: (10.98948711s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.99s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-686000 -n newest-cni-686000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-686000 -n newest-cni-686000: exit status 7 (120.472715ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-686000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/newest-cni/serial/SecondStart (25.47s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-686000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-686000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (25.023420341s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-686000 -n newest-cni-686000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.47s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-686000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/newest-cni/serial/Pause (3.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-686000 --alsologtostderr -v=1
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-686000 -n newest-cni-686000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-686000 -n newest-cni-686000: exit status 2 (441.138977ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-686000 -n newest-cni-686000
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-686000 -n newest-cni-686000: exit status 2 (439.455464ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-686000 --alsologtostderr -v=1
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-686000 -n newest-cni-686000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-686000 -n newest-cni-686000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.33s)

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestAddons/parallel/Registry (15.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 11.022204ms
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-2bxz9" [cd745884-672f-4081-ae99-efef0540b69a] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011224823s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qpd7p" [4e166f5d-4989-4d0b-9df6-8657f603edfa] Running
=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009666748s
addons_test.go:305: (dbg) Run:  kubectl --context addons-492000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-492000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
=== CONT  TestAddons/parallel/Registry
addons_test.go:310: (dbg) Done: kubectl --context addons-492000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.545188183s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.67s)

TestAddons/parallel/Ingress (11.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-492000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-492000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-492000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7ad3126a-04ff-4b31-b1dc-2eefc942a99c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [7ad3126a-04ff-4b31-b1dc-2eefc942a99c] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006966507s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-492000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.33s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-334000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-334000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-66bl6" [481aba60-a7df-43ed-8457-e89ef47a30ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-66bl6" [481aba60-a7df-43ed-8457-e89ef47a30ae] Running
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00894227s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-259000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-259000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-259000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-259000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-259000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-259000"

                                                
                                                
----------------------- debugLogs end: cilium-259000 [took: 6.353995364s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-259000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-259000
--- SKIP: TestNetworkPlugins/group/cilium (6.89s)

TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-412000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-412000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)